Hacker News

If the commands are not restricted to text only, but combine text with geometric context, you can remove a lot of ambiguity. This is often done in video games with contextual interactions.

After all, using the Blender GUI, you can do a lot with only a 2D mouse coordinate and two buttons. So combining 2D mouse coordinates with text could work even better.

A nice evolution would be an AI model that understands natural language instructions while taking into account where your mouse pointer is and how the view is zoomed and oriented, and that has geometric insight into the 3D scene built so far.
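To make the idea concrete, here is a minimal sketch of what such a combined payload might look like: a text instruction bundled with viewport state so the model can resolve deictic words like "this". All names here (`ViewportContext`, `build_request`, the fields) are hypothetical, not any existing Blender or AI API.

```python
from dataclasses import dataclass, asdict
from typing import Optional, Tuple

@dataclass
class ViewportContext:
    """Hypothetical snapshot of editor state sent alongside a text command."""
    instruction: str                 # natural-language command from the user
    mouse_ndc: Tuple[float, float]   # mouse position in normalized device coords
    zoom: float                      # current viewport zoom factor
    view_direction: Tuple[float, float, float]  # camera look direction
    hovered_object: Optional[str]    # object under the cursor, if any

def build_request(ctx: ViewportContext) -> dict:
    """Bundle text and geometric context into one request for the model."""
    return asdict(ctx)

ctx = ViewportContext(
    instruction="bevel this edge",
    mouse_ndc=(0.12, -0.4),
    zoom=1.8,
    view_direction=(0.0, 0.0, -1.0),
    hovered_object="Cube.001",
)
req = build_request(ctx)
# The hovered object is the geometric context that disambiguates "this".
print(req["hovered_object"])
```

The point is simply that the ambiguity in "bevel this edge" disappears once the request carries the cursor's geometric context along with the text.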


