Important Messages, such as Service Disruptions

Our office hours are Monday to Friday, 9 AM to 5 PM UK time. Where possible, we will try to assist people living outside the UK beyond those hours.

We are closed for the Christmas holidays from the evening of Wednesday 20 December 2017 through to Tuesday 2 January 2018.

We would like to wish everyone a merry Christmas and a happy New Year!

Using J-Say - Ten Years On!

I was browsing the web earlier today, viewing the different websites that mention J-Say, a product which does a great deal more than link JAWS for Windows and Dragon NaturallySpeaking together. These included some YouTube videos from people who are using the technology. Reading these pages reminded me that J-Say is now in its tenth year of development, and I thought it was worth writing a few paragraphs to celebrate that fact!

As the developer of J-Say for Astec, I am very proud of what the product has achieved during that time. I have visited the homes and workplaces of people who are using a computer and who would otherwise never have had the chance to do so. These are people with physical disabilities as well as a visual impairment, since J-Say allows hands-free control of the computer. You do not even need to be in the same room as the computer to use J-Say if you are using wireless or Bluetooth technology, and I would suggest it is the ideal solution for people who cannot use the keyboard. People have published books and written stage plays and television scripts with it. They have used it to write letters, manage email or do the grocery shopping, whatever their preference.

There have been attempts to create screen-reading access to the speech recognition capabilities of Windows Vista and Windows 7, but as I understand it, this does not give you complete hands-free control of the computer, nor is the speech engine anywhere near as accurate as the one Dragon NaturallySpeaking provides. A quick Google search will tell you that.

But the question I saw most often was: what does J-Say do that using JAWS alongside Dragon would not?

J-Say is much, much more than a set of JAWS scripts. It has a number of additional components which offer many advantages:

  • Complete echo back of your dictation. We call this "dynamic echo". As you speak, the computer echoes back what you say; this is important because you need to know that what you have said has been recognised successfully. The text can also be sent to a Braille display.
  • Screen-Reading. I believe every JAWS screen-reading function has an equivalent voice command. This enables a person to screen-read in the same way as a keyboard user would (such as speaking or spelling the current line, or hearing the entire document), together with hearing information about tables, using the Research It module, or something as recent as Flexible Web for JAWS 14. I would be surprised if there is anything we don't have.
  • Easier Text Selection. Dragon users can select text if they can see the "from and through" points, so you can ask it to highlight text from the beginning of a passage to the end. If you cannot see what is on the screen, that functionality is unlikely to be usable quickly or with any degree of success. J-Say makes that process easier.
  • Flattening the G U I. The "My Words" facility of J-Say is the primary way we suggest people educate the software so it learns how words and phrases are pronounced. Users traditionally need to interact with dialog boxes within the Dragon Vocabulary Editor to add commonly used words to the speech vocabulary. However, J-Say allows direct access to a text file in Microsoft Notepad, where all such words are typed. This makes it easy not only for the user to enter such terms, but also for the people supporting them, who may have very little computer experience.
  • Starting Dragon automatically. Dragon cannot start automatically when Windows loads. J-Say contains an option for that.
  • Shortcuts, Text Notes and Contacts. These utilities allow quick and direct access by voice to folders and documents, and can reproduce phrases and long passages of text quickly. Contacts allows email addresses which Dragon may not otherwise understand to be recalled reliably.
  • Custom Menu. This is a menu of options anyone can build, such as a trainer or support consultant, to allow a user to access frequently used programs, documents, websites or commands from a single list. No scripting or programming knowledge is necessary.
  • Access to the Correction System. Dragon contains a "Correction Box", which is another method by which the software can be educated to learn how words and phrases are pronounced. J-Say provides full access to this: alternative choices can be read back, or a new spelling can be entered by voice.
  • Learning Module. J-Say comes equipped with a full tutorial teaching the concepts of using J-Say from a person's first dictation exercise through to complex formatting, managing email and using the internet.
  • Additional Functions. There are many other features which have been developed specifically for the voice recognition user who cannot see. These include, for example, the ability to hear which programs are active on the computer. Dragon (and Windows Speech Recognition) allows a user to switch to a given application using a voice command, but this is only useful if you know which programs are running! J-Say will tell you this and provide an easy way of switching to the one you want at any time.
  • Complete Dragon Customisation. In order for Dragon to perform optimally with JAWS, 30 options need to be set within Dragon's "Options" dialog box. This is now seamlessly achieved in the background each time a user creates a new set of speech files, what we call a new "Voice Profile".
  • Independent Reading of the Training Text. This is now far less important than it used to be, but J-Say reads, on demand, the training text which a user can speak when customising Dragon for his or her use. J-Say intelligently works out how much of the text has been spoken and prompts the user with the next few words.

In summary, J-Say has come a long way in its ten-year history. What we have is a product developed with the blind user in mind, just as our upcoming Say-MAGic product will be developed with core input from low vision users. J-Say is also backed up by remote support: a user can always obtain assistance (or even training) remotely if needed.

If you have questions about J-Say, please visit Astec (the J-Say developers) at

You can also contact Next Generation Technologies in North America:

I would like to thank our beta testers and our many users for all their suggestions for feature improvements and for their hard work in testing the product to ensure it is stable and reliable.

I do hope that users of J-Say continue to benefit from it for many years to come, and here's to the next ten years!