DiA Imaging Analysis, an Israel-based AI healthtech company that uses deep learning and machine learning to automate the analysis of ultrasound scans, has closed a $14 million Series B round of funding.
The growth round, which comes three years after DiA's last raise, was joined by new investors Alchimia Ventures and Downing Ventures, alongside existing backers CE Ventures, Connecticut Innovations and Mindset Ventures. The company has raised $25 million to date.
The latest funding will be used to expand its product line, pursue new partnerships with PACS/healthcare IT vendors, resellers and distributors, and continue its expansion in three regions.
The healthtech company sells its AI-powered software to healthcare professionals and clinicians for capturing and analyzing ultrasound imagery, a process that has traditionally been done manually and requires expert knowledge to interpret scan data. DiA describes its AI technology as "taking the subjective out of manual and visual estimation processes being done today".
The company has trained its AI to analyze ultrasound images so that it can automatically identify key details and abnormalities. It offers a range of products to meet different clinical needs associated with ultrasound analysis, including automated measurement of bladder volume from ultrasound data.
DiA says its AI software mimics how the human eye sees boundaries and recognizes movement — claiming it is a leap over subjective human analysis and that it brings efficiency and speed.
Hila Goldman-Aslan, CEO and cofounder of the company, said its software tools are a "supporting instrument" for clinicians who need to both acquire the right image and interpret ultrasound data.
DiA's AI-based analysis is in use in around 20 countries across North America and Europe; in China, it says a partner has received approval to use the software in its device. The company's go-to-market strategy includes working with channel partners such as GE and Philips, which offer the software as an add-on to their ultrasound and PACS systems.
According to Goldman-Aslan, more than 3,000 users currently have access.
Goldman-Aslan says the technology is vendor-neutral and runs on all ultrasound devices and healthcare IT systems, which is why the company has more than 10 partnerships with both healthcare IT/PACS and device companies. She says no other startup in the space has the same capabilities, commercial momentum or number of FDA/CE-cleared AI solutions. "We have seven FDA/CE approved solutions to treat abdominal and cardiac problems, and we are adding more."
An AI performs only as well as the data it is trained on. In the healthcare sector, where efficacy is critical, any bias in the dataset could result in a model that misdiagnoses or underestimates the disease risk of patient groups not represented in the data.
When TechCrunch asked Goldman-Aslan how the company trained its AI to detect key details in ultrasound imagery, she said it has had access to many thousands of images from medical facilities, allowing it to move quickly from one area to another.
She added: "We collect different population data with various pathologies, as well as data from multiple devices."
Citing the saying "garbage in, garbage out", she explained that the key is keeping garbage from going in: the company's data is tagged by doctors and technicians who are each experts with years of experience. "We also have strong rejection systems that reject images taken wrongly. We do this to overcome subjectivity in data acquisition."
It is notable that the FDA approvals DiA has received are 510(k), Class II clearances. Goldman-Aslan also confirmed that the company has not applied for Premarket Approval (PMA) and does not plan to.
The 510(k) route is widely used to obtain clearance for medical devices, but it has been criticised as a "light-touch" process that does not require the same level of scrutiny as PMA.
It is important to note that regulation of fast-developing AI technologies tends to lag behind their actual application. That is especially true as these technologies push into healthcare, where they have huge potential but also pose serious risks, and there is an obvious gap between what device manufacturers promise and the regulatory oversight they actually receive.
In the European Union, for example, the CE scheme, which sets certain safety, health and environmental standards for devices, may require manufacturers only to self-declare conformity, though some medical devices must undergo independent conformity assessment. It is not a strict regime for regulating the safety of new technologies such as AI.
The EU has begun to implement an extra layer of conformity assessment specifically for AI applications deemed high risk — as part of the forthcoming Artificial Intelligence Act.
The AIA is expected to apply additional requirements to healthcare use cases such as DiA's AI-based ultrasound analysis. The EU's co-legislators are still debating the proposal, so a dedicated regulatory system for AI-related risks remains years away.
Published at Tue, 24 August 2021 16:16:21 +0000