I really find LVSpeak useful. I use it every day and it rocks. I have been taking notes while I use it and I have come up with a wish list of features and functionality that I wish LVSpeak had. With respect, I submit my wishlist...
Voice Command Set Wish List
Block Diagram
Block diagram cleanup (implement both modes: clean up the selection and clean up the whole diagram)
Show error list
Run VI, Save VI, New VI, Make SubVI (from the selection, or from nothing if nothing is selected)
UI Front Panel Commands
Align front panel items (all the alignment functionality of the block diagram!), including “vertical gap” and all the distribution commands
Convert all controls and indicators to system controls at once (“system style”)
Change the colors of the hovered target (common colors plus customizable ones; at least transparent)
Activate controls at runtime
Initialize to default value (all selected items)
Improvements to LabVIEW Speak
Speak Interface
The option to select a different hotkey of my choice, right from the settings. I would like to pick one of the extra buttons on my mouse, for instance. I use the Ctrl key for a lot of other things in LabVIEW and Windows, and during that time I don’t want LVSpeak performing commands based on interpreted noise.
Ability to search the available commands (preferably intelligently) in the command list window. I would like to be able to click into a text box in the command list and type to search. The search should begin as I type, like the Quick Drop menu or a Google search.
LabVIEW Speak panel – slimmer and dockable to windows, so I can dock it to the bottom of the VI I am working on. It doesn’t need to be a big box like that; I just need to see a small LED and the interpreted command on one line. Maybe you could offer different window styles in the settings? If LabVIEW Speak were able to write to the status text on the bottom border of the VI, that would be the sweetest! It could also change an icon to green or red when Speak is activated (like the current panel LED functionality).
The option to hold the hotkey down only while speaking, rather than all the way until the requested action has been taken, would be very nice.
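The search-as-you-type idea above could work something like the following sketch. This is Python standing in for a LabVIEW implementation, and the function name and command list are hypothetical illustrations, not part of LVSpeak:

```python
def filter_commands(commands, query):
    """Return the commands containing every typed word, case-insensitively.

    Called on each keystroke, this narrows the visible command list as
    the user types, Quick-Drop style.
    """
    words = query.lower().split()
    return [c for c in commands if all(w in c.lower() for w in words)]

# Hypothetical command list for illustration.
commands = ["open/create/replace file.vi", "close file.vi", "read from text file"]
print(filter_commands(commands, "open file"))  # → ["open/create/replace file.vi"]
```

Matching every typed word (rather than a literal substring) lets a query like “open file” still find “open/create/replace file.vi”, which is the kind of fuzziness that makes Quick Drop-style search pleasant.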
Speak Engine
I would like to not have to say a command’s entire name for LVSpeak to place it. Instead, I could say the first few words, and if those words matched no other command at all, it would go ahead and tell me its guess so I could stop speaking. If multiple VIs matched, it could choose the first on its list or something. It would probably be wise to apply this only to commands of two words or more, including the freaking fast drop stuff. This would work sort of like typing a Google search and it already having an idea of what you are trying to find. I know the Microsoft voice recognition engine only analyzes your statement once there is a pause, so it can’t start guessing with each new word. What I am saying is that sometimes I don’t remember a command’s full name, and I would like to say only what I remember and still get reasonably close to the right command. For instance, it would be nice to say “open create” and have it know that I meant “open/create/replace file.vi”. There seems to be some fill-in-the-blank functionality already, but it usually guesses the completely wrong function or nothing at all. Perhaps this guessing could be a feature you enable or disable based on your preference.
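The prefix-guessing idea could be sketched like this, again in Python as a stand-in for LabVIEW. The function, the tokenizer, and the command list are all illustrative assumptions, not LVSpeak internals:

```python
import re

def tokens(s):
    """Split a command name into lowercase words, treating '/' as a separator."""
    return re.findall(r"[a-z0-9.]+", s.lower())

def guess_command(commands, spoken):
    """Return the first command whose leading words match the spoken words,
    or None when nothing matches (keep listening in that case)."""
    said = tokens(spoken)
    matches = [c for c in commands if tokens(c)[:len(said)] == said]
    return matches[0] if matches else None

# Hypothetical command list for illustration.
commands = ["open/create/replace file.vi", "open vi reference", "close file.vi"]
print(guess_command(commands, "open create"))  # → "open/create/replace file.vi"
```

Because “/” is treated as a word break, saying “open create” matches the leading words of “open/create/replace file.vi” but not “open vi reference”, so the guess is unambiguous and the engine could announce it immediately.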