Alzheimer’s screening, forest-mapping drones, machine learning in space, more – TechCrunch

Research papers come out far too fast for anyone to read them all, especially in the field of machine learning, which now affects (and produces papers in) practically every industry and company. The purpose of this column is to gather the most relevant recent discoveries and papers – particularly but not limited to artificial intelligence – and explain why they matter.

This week: a startup using drones to map forests, how machine learning can map social media networks and predict Alzheimer’s, recent advances in space-based sensors and other computer vision news.

Predicting Alzheimer’s through speech patterns

Machine learning tools are being used to aid diagnosis in many ways, since they are sensitive to patterns that are difficult for humans to detect. IBM researchers have found patterns in speech that are potentially predictive of Alzheimer’s disease in the speaker.

The system requires only two minutes of simple speech in a clinical setting. The team used a large set of data (the Framingham Heart Study, dating back to 1948) to identify patterns of speech among those who would later develop Alzheimer’s. The accuracy rate is about 71%, or 0.74 area under the curve for the more statistically minded. That’s far from a sure thing, but current basic tests are barely better than a coin flip at predicting the disease this far ahead.
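For the curious, "area under the curve" (AUC) measures how often a classifier scores a random positive case above a random negative one: 0.5 is a coin flip, 1.0 is perfect. A minimal sketch of the computation, with invented risk scores (not numbers from the IBM study):

```python
# Toy AUC computation: the probability that a random positive case is
# scored above a random negative one. All scores below are invented
# for illustration; they are not from the IBM study.

def auc(scores_pos, scores_neg):
    """Return area under the ROC curve via pairwise comparison."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5  # ties count as half a win
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical risk scores for speakers who did / did not develop the disease.
later_developed = [0.9, 0.8, 0.75, 0.6, 0.4]
stayed_healthy = [0.7, 0.5, 0.3, 0.2, 0.1]

print(round(auc(later_developed, stayed_healthy), 2))
```

A perfect separator scores 1.0; useless guessing hovers near 0.5, which is why 0.74 is meaningful but far from a sure thing.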

This matters because the earlier Alzheimer’s can be detected, the better it can be managed. There is no cure, but there are promising treatments and practices that can delay or alleviate the worst symptoms. A non-invasive, rapid test for people who may be at risk would be a powerful new screening tool – and is, of course, an excellent demonstration of the utility of this field of technology.

(Don’t read the paper expecting to find the exact telltale symptom – the array of speech features involved is not the kind of thing you could pick out in everyday conversation.)

So-cell networks

Ensuring that your deep learning network generalizes to data outside its training environment is a key part of any serious ML research. But few would set a model loose on data that is completely foreign to it. Maybe they should!

Researchers at Uppsala University in Sweden took a model used to identify groups and connections in social media and applied it to tissue scans. The tissue had been processed so that the resulting images showed thousands of tiny dots, each representing an mRNA molecule.

Normally the different groups, types and regions of tissue these dots represent would need to be identified and labeled manually. But the graph neural network, built to identify social groups based on similarities such as shared interests in a virtual space, proved it could perform a similar function for cells. (See image at top.)

“We are using the latest AI methods – specifically, graph neural networks, developed to analyze social networks – and adapting them to understand biological patterns and gradual variation in tissue samples. Cells are comparable to social groups that can be defined according to the activities they share in their social networks,” said Carolina Wahlby of Uppsala.

This is an interesting illustration not only of the flexibility of neural networks, but of how structures and architectures repeat at all scales and in all contexts. As without, so within.
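The Uppsala work uses graph neural networks; as a much simpler stand-in for the same graph framing, the sketch below (plain Python, invented coordinates) builds a proximity graph over 2-D “dots” and groups connected components – the clustering idea without the learned model:

```python
# Simplified stand-in for graph-based tissue analysis: treat each mRNA
# "dot" as a node, connect nodes within `radius` of each other, and
# return connected clusters. This is plain connected-component grouping,
# not the paper's graph neural network; coordinates are invented.
from collections import deque

def cluster_dots(points, radius):
    """Group 2-D points into clusters of mutually reachable neighbors."""
    r2 = radius * radius
    visited = [False] * len(points)
    clusters = []
    for seed in range(len(points)):
        if visited[seed]:
            continue
        visited[seed] = True
        group, queue = [seed], deque([seed])
        while queue:
            i = queue.popleft()
            xi, yi = points[i]
            for j in range(len(points)):
                dx, dy = points[j][0] - xi, points[j][1] - yi
                if not visited[j] and dx * dx + dy * dy <= r2:
                    visited[j] = True
                    group.append(j)
                    queue.append(j)
        clusters.append(sorted(group))
    return clusters

# Two tight groups of dots, far apart -- expect two clusters.
dots = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10)]
print(cluster_dots(dots, radius=2))
```

A GNN goes further by learning which node features and connections matter, but the underlying representation – dots as a graph – is the same.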

Drones in nature

There are countless trees in the vast forests of our national parks and timberlands, but you can’t put “countless” in the paperwork. Someone has to make a real estimate of how well different regions are growing, the density and types of trees, the presence of disease or wildfire, and so on. This process is only partially automated, as aerial photography and scans reveal only so much, while on-the-ground observation is detailed but extremely slow and limited.

TreeSwift aims to take a middle path by equipping drones with sensors that let them navigate and measure the forest accurately. Flying much faster than a person can walk, they can count trees, spot problems and generally collect a ton of useful data. The company is still at a very early stage, having spun out of the University of Pennsylvania and received an SBIR grant from the NSF.

“Companies are looking more and more to forest resources to combat climate change, but the supply of people working in forests is not growing to meet that need,” Steven Chen, co-founder of TreeSwift and a doctoral student in Computer and Information Science (CIS) at Penn Engineering, said in a news release. “I want to help every forester do what they do more efficiently. These robots will not replace human jobs. Instead, they’re providing new tools to the people who have the insight and passion to manage our forests.”

Another area where drones are making interesting strides is underwater. Autonomous submersibles are helping to map the ocean floor, track ice shelves and follow whales. But they all share a bit of an Achilles’ heel: they need to be periodically picked up, charged and have their data retrieved.

Purdue engineering professor Nina Mahmoodian has created a docking system by which submersibles can easily and automatically connect for power and data exchange.

A yellow marine robot (left, underwater) finds its way to a mobile docking station to recharge and upload data before continuing a task. (Purdue University photo / Jared Pike)

The craft requires a special nosecone, which can locate and plug into a station that establishes a secure connection. The station itself could be an autonomous watercraft or a permanent feature elsewhere – what matters is that the small craft can pull in for a recharge and data dump before carrying on. If it is lost (a real risk at sea), its data won’t be lost with it.

You can see the setup in action below:

Sound in theory

Drones may soon become fixtures of city life, though we are still a ways off from the automated personal helicopters some think are right around the corner. But living under a drone highway means constant noise – so people are always looking for ways to reduce the turbulence, and the resulting sound, from wings and propellers.

Computer model of an aircraft around which there is simulated turbulence.

It looks like it is on fire, but it is turbulence.

Researchers at King Abdullah University of Science and Technology found a new, more efficient way to simulate airflow in these conditions; fluid dynamics is essentially as complex as you make it, so the trick is to apply your computing power to the right parts of the problem. They were able to render only the flow near the surface of the theoretical aircraft in high resolution; beyond a certain distance there is little point in knowing exactly what is happening. Improvements to a model of reality don’t always need to be better in every way – after all, what matters is the outcome.
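The idea of spending resolution only where it matters can be illustrated with a toy one-dimensional grid whose cells are thin at the surface (where turbulence detail matters) and grow geometrically farther away. The parameters below are invented for illustration, not taken from the KAUST work:

```python
# Toy non-uniform grid: fine cells near a wall, geometrically coarser
# cells far from it. first_cell, growth and span are invented values.

def stretched_grid(first_cell, growth, span):
    """Return cell edges from 0 outward; each cell is `growth` x thicker."""
    edges, size = [0.0], first_cell
    while edges[-1] < span:
        edges.append(edges[-1] + size)
        size *= growth
    return edges

edges = stretched_grid(first_cell=0.01, growth=1.3, span=10.0)
near_wall = edges[1] - edges[0]    # finest cell, at the surface
far_field = edges[-1] - edges[-2]  # coarsest cell, far away
print(len(edges) - 1, "cells; finest", near_wall, "coarsest", round(far_field, 2))
```

Covering the same span at uniform near-wall resolution would take a thousand cells; the stretched grid does it in a couple dozen, which is the whole point of concentrating effort near the surface.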

Machine learning in space

Computer vision algorithms have come a long way, and as their efficiency improves they are beginning to be deployed at the edge rather than in data centers. In fact it has become quite common for camera-bearing devices such as phones and IoT gadgets to do some local ML work on the image. But in space it’s another story.

Image Credit: Cosine

Until recently, ML work in space was too expensive, power-wise, to consider – that’s power that could be used to capture another image, transmit data to the surface and so on. Hyperscout 2 is exploring the possibility of doing ML work in space: its satellite applies computer vision techniques to images as soon as they are collected, before sending them down. (“Here’s a cloud – here’s Portugal – here’s a volcano…”)

There is little practical benefit for now, but object detection can easily be combined with other functions to create new use cases: saving power when no objects of interest are present, or tagging images with metadata that other systems can use to work more efficiently.
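The “label on board, downlink selectively” pattern can be sketched in a few lines. The classifier here is a stub (frames carry their own labels) and the categories and threshold are invented – this is the control logic around a model, not the model itself:

```python
# Sketch of selective downlink: an onboard classifier tags each frame,
# and only confident detections of interesting objects are queued for
# transmission. Labels, categories and threshold are invented.

INTERESTING = {"volcano", "wildfire", "flood"}

def classify(frame):
    """Stand-in for an onboard vision model: frames carry their labels here."""
    return frame["label"], frame["confidence"]

def select_for_downlink(frames, min_confidence=0.8):
    queued = []
    for frame in frames:
        label, conf = classify(frame)
        if label in INTERESTING and conf >= min_confidence:
            queued.append({"id": frame["id"], "tag": label})  # metadata rides along
    return queued

captured = [
    {"id": 1, "label": "cloud", "confidence": 0.95},    # routine, skip
    {"id": 2, "label": "volcano", "confidence": 0.90},  # send this one
    {"id": 3, "label": "volcano", "confidence": 0.60},  # too uncertain to send
]
print(select_for_downlink(captured))
```

Every frame that stays on board is transmit power saved – which, as the article notes, is the scarce resource up there.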

Out with the old, in with the new

Machine learning models are very good at making educated guesses, and in fields where there is a large backlog of unsorted or poorly documented data, it can be very useful to have an AI make a first pass so that graduate students can use their time more productively. The Library of Congress is doing it with old newspapers, and now Carnegie Mellon University’s libraries are getting into the same spirit.

CMU’s million-item photo collection is in the process of being digitized, but to make it useful to historians and curious browsers it needs to be organized and tagged – so computer vision algorithms are being put to work grouping similar images, identifying objects and locations, and doing other valuable basic cataloging work.
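One simple way such grouping can work is a perceptual “average hash”: reduce each image to a bit string (each pixel above or below the image’s mean brightness) and group images that hash the same. Real archive pipelines use far richer features; the 2x2-pixel “images” below are invented toys:

```python
# Crude near-duplicate grouping via average hash: each pixel becomes a
# bit (above/below the image's mean), and images sharing a hash land in
# the same group. The tiny 2x2 "images" are invented for illustration;
# this is a simplified stand-in, not CMU's actual pipeline.
from collections import defaultdict

def average_hash(pixels):
    """Bit string marking which pixels exceed the image's mean brightness."""
    mean = sum(pixels) / len(pixels)
    return "".join("1" if p > mean else "0" for p in pixels)

def group_by_hash(images):
    groups = defaultdict(list)
    for name, pixels in images.items():
        groups[average_hash(pixels)].append(name)
    return {h: sorted(names) for h, names in groups.items()}

archive = {
    "bridge_1912.png": [200, 40, 210, 50],    # bright/dark checker pattern
    "bridge_1913.png": [190, 35, 205, 45],    # near-duplicate shot
    "portrait_1920.png": [30, 220, 35, 215],  # inverted pattern
}
print(sorted(group_by_hash(archive).values()))
```

Because the hash keys on the pattern rather than exact pixel values, the two slightly different bridge shots group together while the portrait stays separate.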

“Even a partially successful project would greatly improve the collection metadata, and could provide a possible solution for metadata generation if the archives were ever funded to digitize the entire collection,” said Matt Lincoln of CMU.

A very different project, yet one in a similar spirit, is the work of a student at the Escola Politécnica da Universidade de Pernambuco in Brazil, who had the bright idea of sprucing up some old maps with machine learning.

The tool they used takes old line-drawing maps and attempts to create a sort of satellite image from them using a generative adversarial network; GANs essentially pit two models against each other, one trying to create content that the other can’t distinguish from the real thing.

Image Credit: Escola Politécnica da Universidade de Pernambuco

Okay, the results aren’t what you would call rock solid, but they’re still promising. Such maps are rarely accurate, but that doesn’t mean they’re completely abstract – recreating them in the context of modern mapping techniques is a fun idea that could help make these locations seem less remote.
