
"Bat perception" Technology was developed

18.05.2021

Scientists develop "bat perception" technology for smartphones that can use sound to generate images

2021-05-17 Source: cnBeta

According to foreign media reports, scientists have found a way to give everyday objects such as smartphones and laptops the ability to perceive their surroundings much as bats do. At the core of the technology is a machine learning algorithm that uses reflected echoes to generate images, similar to the way bats use echolocation to navigate and hunt.


The algorithm measures the time it takes for a pulse of sound from a speaker, or a radio-wave pulse from a small antenna, to travel through an indoor space and return to the sensor. By analyzing the results, the algorithm can infer the shape, size and layout of a room, and pick out objects or people within it. The results are displayed as a video that converts the echo data into three-dimensional images.
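The timing step can be illustrated with a short sketch. Assuming the echo is simply a delayed, attenuated copy of the emitted pulse, the round-trip delay can be recovered by cross-correlating the two signals; the waveform, sample rate and numbers below are invented for illustration and are not from the paper:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C

def echo_distance(emitted, received, sample_rate):
    """Estimate the distance to a reflector from the delay between an
    emitted pulse and its echo, found via cross-correlation."""
    corr = np.correlate(received, emitted, mode="full")
    lag = np.argmax(corr) - (len(emitted) - 1)  # delay in samples
    delay_s = lag / sample_rate
    return SPEED_OF_SOUND * delay_s / 2  # round trip -> one-way distance

# Simulated example: a short pulse echoed back after 10 ms (~1.7 m away)
fs = 48_000
t = np.arange(0, 0.005, 1 / fs)
pulse = np.sin(2 * np.pi * np.linspace(2000, 8000, t.size) * t)
received = np.zeros(fs // 10)
delay = int(0.010 * fs)
received[delay:delay + pulse.size] = 0.3 * pulse  # attenuated echo
print(round(echo_distance(pulse, received, fs), 3))  # prints 1.715
```

A single microphone gives only distances, not directions, which is why the learned model described below is needed to turn such delay profiles into images.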


A key difference between the team's achievement and the echolocation of bats is that bats have two ears to help them navigate, while the algorithm is tuned to work with data collected from a single point, such as a microphone or radio antenna. The researchers say the technology could be used to generate images from potentially any device equipped with a microphone and speaker, or with a radio antenna.


Computer scientists and physicists from the University of Glasgow outlined the research, which could find applications in security and healthcare, in a paper published Sunday in the journal Physical Review Letters. Dr. Alex Turpin and Dr. Valentin Kapitany of the University of Glasgow's School of Computing Science and School of Physics and Astronomy are the lead authors of the paper.


Dr. Turpin said: "Animal echolocation is an amazing ability. Science has successfully recreated the ability to generate three-dimensional images from reflected echoes in many different ways, such as radar and lidar."

"The difference between this research and other systems is that, firstly, it only needs data from a single input-a microphone or an antenna-to create a three-dimensional image. Secondly, we believe that the algorithm we develop can combine any data with this The equipment of any one of the two pieces of equipment becomes an echolocation equipment."


"This means that the cost of such three-dimensional imaging can be greatly reduced and many new applications can be opened up. For example, by receiving signals reflected by intruders, the safety of buildings can be guaranteed without traditional cameras. The same method can also be used. To track the movements of frail patients in nursing homes. We can even see that the system is used to track the ups and downs of patients’ chests in medical institutions and remind staff to pay attention to their breathing changes."


The paper outlines how the researchers used the speakers and microphone of a laptop to generate and receive sound waves in the kilohertz range. They also used antennas to do the same with radio-frequency signals in the gigahertz range.


In each case, they collected data on the sound waves reflected from a person as they moved around the room. At the same time, they recorded the room with a special camera that uses a process called time-of-flight to measure the dimensions of the room and provide a low-resolution image of it.
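Time-of-flight ranging rests on the same round-trip principle as the echo measurement, only with light: distance equals the speed of light times the round-trip time, divided by two. A minimal sketch, with an illustrative round-trip time:

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_distance(round_trip_seconds):
    """Time-of-flight ranging: distance = c * t / 2 (out and back)."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2

print(round(tof_distance(20e-9), 3))  # 20 ns round trip -> prints 2.998 (m)
```

A time-of-flight camera performs this measurement per pixel, which is what yields the low-resolution depth image used as ground truth.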


By combining the echo data from the microphone with the image data from the time-of-flight camera, over hundreds of repetitions the research team "trained" their machine learning algorithm to associate specific delays in the echoes with images. Eventually, the algorithm learned to generate highly accurate images of the room and its contents from the echo data alone, giving it a "bat-like" ability to perceive its surroundings.
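The training idea can be sketched in miniature. The toy below replaces the authors' deep network with a plain least-squares fit, and stands in for real echoes and time-of-flight frames with simulated ones; every scene, number and helper here is invented for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

GRID = 8         # reconstruct a toy 8x8 "image" of the room
N_SAMPLES = 256  # length of each simulated echo recording

def simulate_echo(pos):
    """Toy echo: a reflector at grid cell `pos` produces an impulse
    whose delay is proportional to the cell index, plus noise."""
    echo = 0.01 * rng.normal(size=N_SAMPLES)
    echo[pos * 4] += 1.0
    return echo

def make_image(pos):
    """Ground-truth image: one bright pixel at the reflector's cell,
    standing in for a time-of-flight camera frame."""
    img = np.zeros(GRID * GRID)
    img[pos] = 1.0
    return img

# Training set: hundreds of paired (echo, image) examples
positions = rng.integers(0, GRID * GRID, size=600)
X = np.stack([simulate_echo(p) for p in positions])
Y = np.stack([make_image(p) for p in positions])

# "Training": fit a linear echo -> image map by least squares
# (the paper trains a deep neural network; this is only a stand-in)
W = np.linalg.lstsq(X, Y, rcond=None)[0]

# Reconstruct an unseen scene from its echo alone
true_pos = 21
pred = simulate_echo(true_pos) @ W
print(int(pred.argmax()) == true_pos)  # reconstruction locates the reflector
```

The essential point the toy shares with the paper is the supervision scheme: the time-of-flight images are needed only during training, after which the model maps echoes to images on its own.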


This research builds on the team's previous work, in which they trained a neural network algorithm to build three-dimensional images by measuring the reflections of flashes of light with a single-pixel detector.


Dr. Turpin added: "We have now been able to demonstrate the effectiveness of this algorithmic machine learning technique with both light and sound, which is very exciting. It is clear that there is a lot of potential here for sensing the world in new ways, and we are keen to keep exploring the possibility of generating ever higher-resolution images in the future."