It is often claimed that about 80% of the sensory information we receive is visual. Some form of visual processing is therefore indispensable for any attempt to build an artificial brain.
Despite enormous breakthroughs in related fields (pattern recognition, computer vision), state-of-the-art techniques remain far from what our brain is capable of. Why? Simply because there is still much we do not know about how the brain processes visual stimuli. We believe further study is essential.

Mainstream pattern recognition research for visual processing constructs a mapping from images to language. For example, given a picture of an apple as input, the mapping should output “apple”.
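To make this image-to-word mapping concrete, here is a toy sketch: a nearest-neighbour classifier over hand-made colour features. The feature vectors and labels are illustrative assumptions, not a description of any actual system.

```python
# Toy image-to-word mapping: match an image's colour feature to the
# closest labelled prototype. Features and labels are hypothetical.

LABELLED_FEATURES = {
    "apple": (0.9, 0.1, 0.1),   # dominantly red
    "leaf": (0.1, 0.8, 0.2),    # dominantly green
}

def classify(feature):
    """Return the label whose prototype is closest in squared L2 distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(LABELLED_FEATURES, key=lambda lbl: dist(LABELLED_FEATURES[lbl], feature))

print(classify((0.85, 0.15, 0.12)))  # a reddish image patch -> "apple"
```

Real systems replace the hand-made features with learned ones, but the overall shape of the mapping (image features in, word out) is the same.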

Our current applied research takes the brain’s visual cognitive architecture as its basis. The brain processes visual information in parallel: colors, shapes, movements, and so on each have their own dedicated circuitry.
But visual information alone is insufficient for reliable, high-performance recognition. Humans are so good at pattern recognition because we draw on all of our sensory information, knowledge, and experience.
We therefore need to make these computable and integrate them into our artificial brain.
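The parallel-circuitry idea above can be sketched as separate feature “circuits” that analyze an image independently and whose partial results are then merged into one percept. The extractors here are placeholders, not the laboratory’s actual models.

```python
# Sketch of parallel visual circuits: colour, shape, and motion are
# analysed independently, then merged. All analyses are placeholders.

from concurrent.futures import ThreadPoolExecutor

def colour_circuit(image):
    return {"dominant_colour": "red"}   # placeholder analysis

def shape_circuit(image):
    return {"outline": "round"}         # placeholder analysis

def motion_circuit(image):
    return {"movement": "static"}       # placeholder analysis

def perceive(image):
    """Run each circuit concurrently and merge the partial percepts."""
    circuits = [colour_circuit, shape_circuit, motion_circuit]
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda f: f(image), circuits)
    percept = {}
    for partial in results:
        percept.update(partial)
    return percept

print(perceive("apple.png"))
```

The merging step is where the integration problem mentioned above becomes hard: combining knowledge and experience with raw visual features is far less mechanical than merging dictionaries.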

Visual processing research in the Hagiwara laboratory ranges from pattern recognition to understanding and interpretation. The final step is to merge our visual system with our language and kansei (affective) systems.

A sample of our past and current research:

Image generation from sentences

Mainstream research maps images to text. Very few projects construct the inverse mapping; this research is one of them.
The input document is analyzed, and salient words are extracted to serve as keys into a database of image components (parts of a landscape, such as a mountain, a lake, etc.).
These image components are then synthesized into the final image.
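The pipeline above (analyze text, extract salient words, look up image components, synthesize) can be sketched as follows. The keyword filter, database contents, and synthesis step are illustrative assumptions; the actual system works on real images, not strings.

```python
# Sketch of the sentence-to-image pipeline: salient words key into an
# image-component database, and the components are combined. The
# database and the "synthesis" step are toy stand-ins.

import re

# Hypothetical component database: salient word -> landscape part.
IMAGE_DB = {
    "mountain": "component:mountain.png",
    "lake": "component:lake.png",
    "forest": "component:forest.png",
}

def extract_salient_words(document):
    """Keep only words that key into the image-component database."""
    words = re.findall(r"[a-z]+", document.lower())
    return [w for w in words if w in IMAGE_DB]

def synthesize(components):
    """Stand-in for image synthesis: list the layers to combine."""
    return " + ".join(components)

def sentence_to_image(document):
    keys = extract_salient_words(document)
    return synthesize(IMAGE_DB[k] for k in keys)

print(sentence_to_image("A quiet lake below a tall mountain."))
```

In the actual research the salient-word extraction is a genuine text-analysis step and the synthesis composites image regions; the sketch only shows how the stages connect.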