In addition to developing intelligent products for enterprises, we are also actively involved in artificial intelligence research. We have developed a novel symbolic model of the human brain that describes how it learns, how it operates, and how it thinks and emotes. The pre-print version of our paper, titled RTOP: A Conceptual and Computational Framework for General Intelligence, discusses our philosophy and approach towards achieving artificial general intelligence in detail.

We have taken concrete steps to turn this conceptual model into a working implementation. The RTOP model discussed in the paper is implemented to a large extent in a program called the RTOP Agent Program. The agent captures images and audio from the computer's camera and microphone, and it simulates hunger and comfort as internal senses; an interacting user can raise or lower its comfort through a user interface. Foreground processing captures the inputs and actions, saves them to the appropriate memory nodes, and builds temporal observation paths and observation trees, while background processing detects substantial changes in the audio or visual input and shifts the agent's attention accordingly. The actions built into the program are attention change, image focus change, and speech, and they are mostly generated randomly. To make predictions, the program looks up the memory node of the ongoing observation in the knowledge base and retrieves the relevant observation trees, along with their connection probabilities and the expected changes in pleasure and pain. The program takes learned actions based on happiness, and it also implements some of the principles of generalized learning. Further details about the program are given in the paper.
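To make the description above more concrete, here is a minimal sketch of such an agent loop in Python. All names here (`MemoryNode`, `Agent`, the action strings, the thresholds) are our own illustrative assumptions, not the actual API of the RTOP Agent Program; the sketch only mirrors the flow described: memory nodes keyed by observation, attention shifts on substantial input change, prediction via stored connection probabilities and expected pleasure/pain changes, and a random fallback over the built-in actions.

```python
import random

class MemoryNode:
    """Hypothetical memory node: an observation plus learned connections."""
    def __init__(self, observation):
        self.observation = observation
        # successor observation -> (connection probability, expected pleasure/pain change)
        self.connections = {}

class Agent:
    """Illustrative agent loop; names and thresholds are assumptions."""
    def __init__(self):
        self.knowledge_base = {}   # observation -> MemoryNode
        self.comfort = 0.0         # internal sense, adjustable by the user
        self.hunger = 0.0          # internal sense, simulated
        self.attention = "visual"

    def foreground_step(self, observation):
        """Save the observation to its memory node (extending the observation path)."""
        return self.knowledge_base.setdefault(observation, MemoryNode(observation))

    def background_step(self, audio_change, visual_change, threshold=0.5):
        """Shift attention when one input changes substantially."""
        if audio_change > threshold and audio_change >= visual_change:
            self.attention = "audio"
        elif visual_change > threshold:
            self.attention = "visual"

    def predict(self, observation):
        """Look up the ongoing observation and return successors,
        most probable first."""
        node = self.knowledge_base.get(observation)
        if node is None:
            return []
        return sorted(node.connections.items(),
                      key=lambda kv: kv[1][0], reverse=True)

    def act(self, observation):
        """Prefer the successor with the best expected pleasure/pain change;
        otherwise fall back to a random built-in action."""
        predictions = self.predict(observation)
        if predictions:
            best = max(predictions, key=lambda kv: kv[1][1])
            return "seek:" + best[0]
        return random.choice(["attention_change", "focus_change", "speech"])
```

In this toy version, "taking learned actions based on happiness" is reduced to choosing the successor with the highest expected pleasure/pain change; the real program as described in the paper is considerably richer.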

We are looking for partners who share our vision and are willing to contribute, in whatever way they can, to the progress of this research. If you are interested in joining us in our efforts, you can write to us at