Mardava Gubbi, a doctoral candidate in the Department of Electrical and Computer Engineering at Johns Hopkins University, is working to enhance the traditional photoacoustic imaging process. A modality related to ultrasound imaging, photoacoustic imaging entails inserting a light source into a hollow-core needle to illuminate the patient's tissues; the tissues absorb the light, expand, and create pressure waves that are received by an ultrasound probe. These pressure waves are then converted into interpretable images using a process called beamforming, according to a Johns Hopkins statement detailing Gubbi's research.
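To illustrate what beamforming does with the raw channel data, here is a minimal delay-and-sum sketch in Python; the function name, geometry, and fixed speed of sound are simplifying assumptions for illustration, not the imaging system's actual reconstruction code.

```python
# Illustrative sketch only: delay-and-sum beamforming of raw photoacoustic
# channel data into an image. Geometry and names are simplified assumptions.
import numpy as np

def delay_and_sum(channel_data, pitch, fs, c=1540.0):
    """channel_data: (n_channels, n_samples) raw pressure recordings;
    pitch: element spacing in meters; fs: sampling rate in Hz;
    c: assumed speed of sound in m/s (a fixed, idealized model)."""
    n_ch, n_samp = channel_data.shape
    elem_x = (np.arange(n_ch) - n_ch / 2) * pitch   # transducer element positions
    depths = np.arange(n_samp) * c / fs             # pixel depths (one-way travel)
    image = np.zeros((n_ch, n_samp))
    for i, x in enumerate(elem_x):                  # image column at lateral x
        # one-way distance from each pixel in this column to every element
        dist = np.sqrt((elem_x[None, :] - x) ** 2 + depths[:, None] ** 2)
        idx = np.clip((dist / c * fs).astype(int), 0, n_samp - 1)
        # delay each channel by the travel time, then sum across channels
        image[i] = channel_data[np.arange(n_ch)[None, :], idx].sum(axis=1)
    return np.abs(image)
```

When the assumed speed of sound or geometry does not match the patient, the delayed signals no longer align cleanly, which is one source of the image artifacts discussed next.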
"Unfortunately, images produced through beamforming can often be unclear and riddled with 'artifacts,' phantom images that make it difficult for humans to interpret what is actually there," according to the statement.
Gubbi is attempting to improve this process by bypassing beamforming, using an approach that integrates a photoacoustic imaging system, a deep learning-based tracking system, and a robotic arm to track needle tips in different imaging environments during surgery.
Deep learning refers to "a family of machine learning algorithms capable of extracting information from raw inputs such as images. With Gubbi's work, the visual component consists of a photoacoustic imaging system providing raw sensor data to the deep learning system, which then extracts the position of the needle tip in the raw sensor data frame," according to Johns Hopkins. "The coordinates of the needle tip position are then provided to the robotic control system, which gives commands to move the robot to track the needle tip."
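To make this pipeline concrete, below is a minimal sketch in PyTorch of the kind of system described: a small convolutional network that regresses needle-tip coordinates from a raw sensor data frame, feeding a proportional control step that moves the robot toward the tip. The network architecture, names, and control law are hypothetical stand-ins, not the PULSE Lab's implementation.

```python
# Illustrative sketch only: raw photoacoustic data -> deep network -> tip
# coordinates -> robot command. All names and shapes are hypothetical.
import torch
import torch.nn as nn

class NeedleTipNet(nn.Module):
    """Toy CNN mapping raw channel data to (lateral, axial) tip coordinates."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 2)  # regress the two tip coordinates

    def forward(self, frame):
        return self.head(self.features(frame).flatten(1))

def servo_step(robot_pose, tip_xy, gain=0.5):
    """Proportional visual-servoing update: move the robot toward the tip."""
    return robot_pose + gain * (tip_xy - robot_pose)

# One control-loop iteration on a dummy frame
# (128 transducer channels x 1024 time samples).
net = NeedleTipNet().eval()
frame = torch.randn(1, 1, 128, 1024)   # stand-in for raw sensor data
with torch.no_grad():
    tip_xy = net(frame)[0]             # estimated needle-tip coordinates
robot_pose = servo_step(torch.zeros(2), tip_xy)
```

In a real system this loop would run continuously, with each new frame producing updated coordinates and a new robot command.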
Gubbi noted that this research could potentially improve surgical procedures such as biopsies and catheter insertions by automating the task of tracking needle and catheter tips and providing doctors with information regarding the surrounding tissue. He believes that it could also cut down on the risk of surgical complications such as bleeding, accidental injury to nearby critical organs, and sepsis related to these procedures (2021 IEEE International Conference on Robotics and Automation; doi: 10.1109/ICRA48506.2021.9561369).
"This research is notable for two reasons," Gubbi said in a statement. "First, the idea of using photoacoustic images as inputs to a robotic visual serving system-which refers to using information extracted from images to control the motion of robotic systems-is new and has multiple advantages over the traditional method of ultrasound imaging."
Second, "using the outputs of a deep learning-based system as inputs to a robotic visual servoing process allows for improved tracking of needle tips compared to creating a human-interpretable image and extracting the needle tip position from that image," said Gubbi.
Obese patients in particular stand to benefit, he added, as photoacoustic imaging "is better suited to track needle tips in those patients compared to traditional ultrasound imaging, which often results in 'noisier' images in patients with larger body sizes."
While the traditional photoacoustic imaging process reconstructs interpretable images from raw photoacoustic sensor data using a beamforming algorithm, "this approach does not consider all factors and variations contributing to the wave propagation process in human patients," added Muyinatu Bell, PhD, John C. Malone Assistant Professor at Johns Hopkins University, who devised the original idea and directs the PULSE Lab, the research group in which Gubbi is working.
As such, artifacts can be present in photoacoustic images and the ability to "distinguish true signals from these artifacts" is diminished, said Bell. By relying on state-of-the-art computer vision techniques, such as deep learning, "we bypass beamforming altogether," she said.
"We train neural networks on simulated data, replicating thousands of possible variations in the physics of wave propagation, and the resulting raw sensor data are used as training examples to learn unique relationships, such as the shape of a waveform relative to its depth in the image," Bell explained.
"These relationships are then used to locate the origin of a photoacoustic signal in real data. This process is significantly different from a traditional approach that relies on flawed mathematical models to convert the raw patient data to an artifact-prone image," she continued.
"Instead, we use our process to create high-contrast, high-resolution, artifact-free images. In some cases, images are not necessarily required, as the coordinate locations of photoacoustic signals are one direct output of our deep learning approach, and this output coincides with the primary information extracted from images when interfacing photoacoustic imaging with our novel robotic systems for image guidance."
Looking forward, this research could potentially improve procedures such as biopsies and catheter insertions and impact patient care in various ways, Bell concluded.
"Automating the task of tracking needle tips, catheter tips, and other surgical tool tips has the potential to offer a hands-free approach to finding and following the needle, catheter, or tool tip, enabling members of the radiology team to instead focus on performing more specialized tasks that require greater skill," she said.
"Our approach also has the benefits of visualizing needle, catheter or tool tips more clearly, not missing important targets, mitigating the risks of excessive bleeding and minimizing significant radiation exposure to children as well as operators who image multiple patients per day and per year. This research also has the potential to reduce errors during surgical procedures, thereby reducing infection rates, shortening hospital stays, and improving postoperative function in patients."
Mark McGraw is a contributing writer.