Does any of you use speech recognition tools for writing code? I. E. Dragon and similar?

I had this dream where I could talk to the IDE, and it was way smarter than just taking literal text input. I could talk to it like an AI, and say “add a private property string to the class, named “something” with setter but no getter. Initialize it in create to empty string.”
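As a toy illustration (in Python, with hypothetical names) of what such an IDE might emit for that spoken command, the result would be something like a class with a private attribute, a setter but no getter, and initialization in the constructor:

```python
# A sketch of the code an AI-driven IDE might generate for the command:
# "add a private property string to the class, named 'something', with
# setter but no getter; initialize it in create to empty string."
# The class name is a placeholder; nothing here is a real IDE feature.

class MyClass:
    def __init__(self):                # "create" maps to the constructor
        self._something = ""           # private string, initialized empty

    def set_something(self, value: str) -> None:  # setter only, no getter
        self._something = value


obj = MyClass()
obj.set_something("hello")
```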

Now I can’t stop thinking about having such an IDE.

#wishfulthinking

14 thoughts on “Does any of you use speech recognition tools for writing code? I. E. Dragon and similar?”


  1. I’ve been thinking of the same thing: basically an AI pair programmer to automate tasks. There is one that I know of (though it doesn’t use speech) called Kite. It has a sidebar that automatically looks up more info about what you are currently coding and shows it to you, in addition to smarter code completion that uses popularity ranking instead of alphabetical order. youtube.com – Kite – Your programming copilot. Microsoft also has some AI called DeepCoder which can write about 5 lines of code using what it steals off of StackOverflow.



  2. Computers doing most of the work of converting a specification into code has been “just around the corner” for decades. For a while, with Delphi, we got near that goal, at least for the user interface. But it never happened because, face it, a programming language is the specification that the computer translates into binary code.


    It never happened with natural languages, except for very limited use cases. Now we are to believe that the additional complexity of voice recognition on top of that will make this dream come true?


    I for one remain sceptical.



  3. There are indeed some obstacles.


    1. Basic speech recognition – which IMO is now at a level where the semantic and syntactic understanding is (with training) fairly accurate and approaching real-time fluency.


    2. Context understanding, where the spoken object references are identified and understood, so that the “its”, “the variables”, “current” and so on can be resolved – which also seems possible to handle fairly well.


    3. Verb understanding, where the operations on those references are understood – this also appears to be fairly well covered by speech recognition.


    4. Code understanding, where the context references connect to the right elements in the code – which seems quite feasible, although context clarification might need the option to pick from possible “targets”.


    5. Code change understanding, where the spoken actions can be translated into changes to the current code.


    The first three points appear entirely possible, IMO. The fourth also seems doable, and the fifth is perhaps the most complex one, as you will need both to identify where something is in the code, and to change or add code in a syntactically correct way.
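    The later steps can be sketched as a toy pipeline (a Python sketch under stated assumptions – the command grammar, source snippet, and function names are all hypothetical): resolve a spoken verb and its target reference against the current code, then translate the action into a code change.

    ```python
    import re

    # Toy sketch of steps 3-5 (not a real tool): understand one verb,
    # find the referenced element in the code, and apply the change.

    SOURCE = "count = 0\ncount = count + 1\n"

    def apply_command(source: str, command: str) -> str:
        """Handle one hypothetical command: 'rename variable X to Y'."""
        m = re.fullmatch(r"rename variable (\w+) to (\w+)", command)
        if not m:
            raise ValueError("command not understood")   # step 3: unknown verb
        old, new = m.groups()
        if not re.search(rf"\b{old}\b", source):         # step 4: find target
            raise ValueError(f"no variable named {old}")
        return re.sub(rf"\b{old}\b", new, source)        # step 5: change code

    print(apply_command(SOURCE, "rename variable count to total"))
    ```

    Even this tiny example shows why the fifth step is the hard one: a real tool would need a syntax tree rather than regex substitution to keep the edit syntactically correct.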


    That would at least mean a lot of relief with regards to the physical strain of working with a mouse and keyboard.


    The sixth step would be to move to a higher abstraction level when describing the initial design, where the AI would be able to identify and select suitable patterns, models and algorithms – something that is still somewhere down the road.


    I would predict that initially the AI would be fairly domain specific, but imagine the potential leap in productivity!



  4. Lars Fosdal, I had the idea to develop a speech frontend for class modelling.


    I use Dragon on a regular basis and have developed some Outlook add-ins that allow me to use natural language in very specific areas (e.g. sorting into folders or moving to folders). Even though those add-ins are very productive, especially in the case of deep folder hierarchies, my overall impression is that speech efficiency decreases as the context in which it is implemented broadens. To put it another way: you won’t be happy with the result of speech-enabling coding unless the AI driving it narrows down the context so much that misunderstandings are prevented. As soon as misunderstandings start, the inefficiency will be troublesome and you will prefer to do everything by hand again. But yes – someone will develop such a thing in the future, for sure. A good speech frontend for modelling classes would give me the most benefit for the effort at the moment, I think.

