AI-Based Automation and Quantification Solution for Preclinical Animal Motion Experiments
Animals move according to commands from the brain, so careful quantification of motion allows researchers to reason about brain function. To quantify the motion of a laboratory mouse, experimenters have traditionally had to observe the mouse continuously, replay recorded videos repeatedly, and record and analyze the results by hand. Such visual analysis is subjective and error-prone. In the present study, an AI was trained on the individual body parts of the mouse so that it could observe and quantify the motion automatically. The results of the study enabled the researchers to increase the accuracy of animal motion experiments and to find motions that had not been revealed before.
‘Avatar System’ for AI-Based Real-Time Observation
The AI system developed in the present research project allows real-time observation of mouse motion. Based on information about the mouse's body and organs, the AI reconstructs the body as an action skeleton, like an avatar. The action skeleton consists of eight skeleton vectors, each with three degree-of-freedom (DOF) variables for rotational, vertical, and radial traverse; together, these give the action skeleton 24 DOFs. Through combinations of these DOFs, every observed momentary body position is recorded by the AI system in real time. In the past, recorded videos had to be replayed and observed after the fact, which took a long time and yielded low accuracy. The newly developed system shortens the time needed to obtain analytical results and increases objectivity, because the AI performs observation, quantification, and recording in real time.
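The action-skeleton representation described above (eight skeleton vectors, each with three DOF variables, for 24 DOFs per frame) can be sketched as a simple data structure. This is a hypothetical illustration only; the class and field names below are not the authors' API.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SkeletonVector:
    """One of the eight skeleton vectors, with its three DOF variables."""
    rotational: float  # rotational traverse
    vertical: float    # vertical traverse
    radial: float      # radial traverse

@dataclass
class ActionSkeleton:
    """A single momentary body position, encoded as eight skeleton vectors."""
    vectors: List[SkeletonVector]

    @property
    def dof_count(self) -> int:
        # Each vector contributes 3 DOFs; 8 vectors give 24 DOFs in total.
        return 3 * len(self.vectors)

# A neutral pose: all eight vectors at zero in every DOF.
pose = ActionSkeleton(vectors=[SkeletonVector(0.0, 0.0, 0.0) for _ in range(8)])
print(pose.dof_count)  # 24, matching the 24 DOFs stated in the text
```

Each recorded frame would then be one such 24-DOF pose, and a motion is a time series of these poses.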
Various Types of Mouse Motion Newly Discovered by the AI System
After removing the redundancy among the quantified action skeletons, all observations of mouse motion are reconstructed into a set of motion units. Time-series motions can then be analyzed precisely as time-sequenced combinations of these motion units. In testing, 2,178 non-redundant pieces of stationary position information were captured in five minutes (18,000 frames). Assuming that all positions are equally likely to appear at every moment, the information encoding the mouse motion observable in the system per second (60 frames) is log2(2178^60) = 665.3273 bits. Therefore, the quantity of motion information over the entire five minutes of recording time is log2(2178^(60×300)) ≈ 199,600 bits, which is about 0.2 megabits. This means that motion information for far more diverse positions is obtained than with visual observation. The technology may be applied to capture all components of motion that a mouse can show, together with their probabilities of appearance; from these, the entire mouse motion space and its entropy may be calculated. The present AI system may contribute to increasing the accuracy of behavior analysis in the study of the brain.
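The information-content figures above follow directly from the uniform-distribution assumption, since log2(n^k) = k·log2(n). A minimal check, using the counts stated in the text (2,178 postures, 60 frames per second, 300 seconds):

```python
import math

POSTURES = 2178   # non-redundant stationary positions captured
FPS = 60          # frames per second
SECONDS = 300     # five minutes of recording (18,000 frames total)

# Bits per second under a uniform distribution over postures:
# log2(POSTURES ** FPS) = FPS * log2(POSTURES)
bits_per_second = FPS * math.log2(POSTURES)

# Total bits over the full recording: log2(POSTURES ** (FPS * SECONDS))
total_bits = SECONDS * bits_per_second

print(round(bits_per_second, 2))  # ≈ 665.33 bits per second
print(round(total_bits))          # ≈ 199,598 bits, about 0.2 megabits
```

Note that the per-frame entropy, log2(2178) ≈ 11.09 bits, is only an upper bound: real postures are not equally likely, so the true entropy of the motion space would be lower, which is what the proposed entropy calculation would measure.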
Prof. Kim, Daesoo
2019 KI Annual Report