The Operating System class I am TAing has reached the file system part, where a virtual file system layer above the different file systems is introduced.
On the drive to Qualcomm in New Jersey, a question came to my mind: is there a virtual action manipulation layer between our mind and our motor system? For example, our mind may issue a command: grab the cup on the desk. After interpretation by the virtual action manipulation layer, that single command is translated into several different low-level commands to different motor systems:
To the eyes, the command becomes Eye_Grabcup, which focuses on the cup;
To the legs, the command becomes Leg_Grabcup, which drives the legs to walk toward the cup;
And to the hands, the command becomes Hand_Grabcup, which is to reach out and grab the cup!
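The analogy above can be sketched in code, much like a VFS routing a generic `read()` down to a concrete file system. This is a minimal toy sketch; all the function and table names here are hypothetical, invented just for illustration:

```python
# Hypothetical per-subsystem handlers for one high-level command.
def eye_grab_cup():
    return "focus on the cup"

def leg_grab_cup():
    return "walk toward the cup"

def hand_grab_cup():
    return "reach out and grab the cup"

# The "virtual action layer": maps one abstract command to
# subsystem-specific low-level actions, like a VFS dispatch table.
ACTION_TABLE = {
    "grab_cup": {
        "eyes": eye_grab_cup,
        "legs": leg_grab_cup,
        "hands": hand_grab_cup,
    }
}

def dispatch(command):
    """Translate a high-level command into low-level subsystem actions."""
    return {subsystem: handler()
            for subsystem, handler in ACTION_TABLE[command].items()}

print(dispatch("grab_cup"))
```

The mind only knows "grab_cup"; the table decides what each motor subsystem actually does, so new subsystems can be added without changing the high-level command.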
Now, when we talk about the Vision Executive and the Language Executive, is it reasonable to put a virtual action manipulation layer over them? For example, "focus" is an actual action of the VE, and "search related knowledge" is an action of the LE, but both of them share the same virtual-layer function: "Attend!"
Finally, we completed the Qualcomm presentation~~ It always feels good to have a chance to share our ideas and preliminary results. The researchers from Qualcomm and the other student finalists also gave us a lot of valuable suggestions. We appreciate every comment, and there is still a loooot of work ahead~~