Artificial intelligence (AI) is revolutionizing nearly every aspect of modern life. In the medical field, robotic surgery is the sector seeing some of the most innovative and impactful advancements. In this narrative review, we outline recent contributions of AI to robotic surgery, with a particular focus on intraoperative enhancement. AI modeling now provides surgeons with advanced intraoperative metrics such as force and tactile measurements, enhances detection of positive surgical margins, and even enables complete automation of certain steps in surgical procedures. AI is also revolutionizing surgical education: AI modeling applied to intraoperative surgical video feeds and instrument kinematics data allows for the generation of automated skills assessments, and AI shows promise for generating and delivering highly specialized intraoperative feedback to training surgeons. Although the adoption and integration of AI show promise in robotic surgery, they raise important, complex ethical questions, and frameworks for thinking through the ethical dilemmas raised by AI are outlined in this review. AI enhancement of robotic surgery is among the most groundbreaking research taking place today, and the studies outlined in this review represent some of the most exciting innovations of recent years.
Keywords: artificial intelligence, robotic surgery, robot-assisted surgery, intraoperative enhancement, clinical improvement, ethical considerations of AI
Since its inception and widespread adoption, artificial intelligence (AI) has revolutionized nearly every aspect of human life. AI is the study and development of algorithms that give machines the ability to reason and perform cognitive functions such as problem-solving and decision-making [1]. From finance to agriculture, manufacturing to education, AI has fundamentally altered our ability to understand and respond to complex problems. Perhaps the most impactful application of AI to human life is in medicine, where it is being used to help physicians make more precise decisions and predict patient outcomes with a greater degree of certainty. Within the medical field, surgery has experienced one of the largest impacts from the adoption of AI, as more and more operations are performed with robotic assistance. Current surgical robots are controlled through a “master–slave” dynamic in which the robot has no autonomy without a human operator. However, recent advances in AI and machine learning (ML) seek to expand the capabilities of surgical robots and augment the surgical experience in the operating room. Surgical robots rely on data captured through sensors and imaging to operate, and this wealth of captured data is the key driver behind AI innovations in robotic surgery [1].
In this review, we focus on the recent advancements that AI brings to the world of robotic surgery, with a particular emphasis on intraoperative applications. We also outline important ethical considerations for the incorporation of AI into robotic operations. Broadly, intraoperative enhancements provided by AI can be classified into two categories: robotic autonomy and surgical assessment/feedback. Advances in each of these categories are focused on creating environments for safe, data-informed surgical decision-making and enhancing surgical education (Fig. 1). Robotic surgery’s continued integration of AI will improve patient outcomes and make surgery safer in the years to come.
Fig. 1 Data inputs and outputs for the development of artificial intelligence/machine learning applications for the improvement of robotic surgery
A literature search for this narrative review was completed using the PubMed database. Literature was limited to the most recent 2 years (November 2021–November 2023) to select papers representing the most recent advancements in the field. The search terms were as follows: (((((Artificial Intelligence) OR (Machine Learning)) AND (Robotic)) OR (Robot-Assisted)) AND (Surgery)) AND (Autonomy); (((((Artificial Intelligence) OR (Machine Learning)) AND (Robotic)) OR (Robot-Assisted)) AND (Surgery)) AND (Skill Assessment); and (((((Artificial Intelligence) OR (Machine Learning)) AND (Robotic)) OR (Robot-Assisted)) AND (Surgery)) AND (Feedback). For the context of this review, “robot” refers to a device that is assistive in the operating room, and “artificial intelligence/machine learning” refers to the development of algorithms that give machines decision-making capacity [2]. The study selection process is outlined in the PRISMA flow diagram (Fig. 2). A total of 553 unique records were identified; of these, 463 full-text articles were assessed for eligibility, and 45 were selected as representing the most recent advancements in the field and are included in this narrative review.
Fig. 2 PRISMA flow diagram for literature selection. PRISMA, Preferred Reporting Items for Systematic Reviews and Meta-Analyses
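For readers who wish to reproduce or update the literature search, the queries above can be submitted to PubMed programmatically. The sketch below uses Biopython’s Entrez module, which is an assumed tooling choice (any NCBI E-utilities client would work); the contact e-mail is a placeholder, and the date window mirrors the November 2021–November 2023 restriction described above.

```python
# Minimal sketch: re-run the PubMed queries used for this review.
# Assumes Biopython is installed (pip install biopython); the e-mail is a placeholder.
from Bio import Entrez

Entrez.email = "your.name@example.org"  # hypothetical; NCBI requires a contact address

QUERIES = [
    '(((((Artificial Intelligence) OR (Machine Learning)) AND (Robotic)) OR (Robot-Assisted)) AND (Surgery)) AND (Autonomy)',
    '(((((Artificial Intelligence) OR (Machine Learning)) AND (Robotic)) OR (Robot-Assisted)) AND (Surgery)) AND (Skill Assessment)',
    '(((((Artificial Intelligence) OR (Machine Learning)) AND (Robotic)) OR (Robot-Assisted)) AND (Surgery)) AND (Feedback)',
]

pmids = set()
for query in QUERIES:
    handle = Entrez.esearch(
        db="pubmed",
        term=query,
        mindate="2021/11/01",
        maxdate="2023/11/30",
        datetype="pdat",   # restrict by publication date
        retmax=1000,
    )
    record = Entrez.read(handle)
    handle.close()
    pmids.update(record["IdList"])

print(f"Unique records retrieved: {len(pmids)}")
```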
The most widely used robotic surgical system today is the da Vinci (Intuitive Surgical, Sunnyvale, CA), which employs a “master–slave” relationship in which a human surgeon performs all surgical gestures from a console. These gestures are then transmitted to the robotic arms docked at the patient’s surgical site. In other words, the da Vinci robot cannot act autonomously without the input of a human operator. Autonomy, however, is not binary; it can be thought of as a spectrum ranging from not autonomous to completely autonomous, and the levels of robotic surgical autonomy are defined in Table 1 [3, 4]. This section highlights important advances in intraoperative robotic autonomy and assistance that are helping surgeons perform surgical tasks; the review covers innovations across all surgical subspecialties that utilize surgical robots.
Table 1 Levels of automation in robotic surgery [3, 4]
Adapted from Panesar et al. and Attanasio et al.
Human vs. autonomous robotic surgeon:

| Level | Degree of automation | Description |
|---|---|---|
| 0 | No automation | Traditional surgery: human performs all surgical tasks; includes open, laparoscopic, and “master–slave” robotic surgical methods |
| 1 | Some assistance | e.g., intraoperative image guidance, augmented reality rendering; human surgeon still performs all surgical tasks |
| 2 | Partial automation | Reduced required level of human input, but human surgeon still performs the majority of surgical tasks |
| 3 | Conditional automation | e.g., automated bone drilling to a prespecified depth with human setup; robot can perform certain procedural steps without human input |
| 4 | High automation | e.g., automated skin closure or lymph node dissection; robot capable of performing most, if not all, parts of a complex procedure with minimal human input |
| 5 | Complete automation | e.g., a surgical robot that could perform an urgent cholecystectomy during a space flight; robot makes all surgical decisions with no human input |
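Purely as an illustration of how such a scale might be represented in software (a hypothetical encoding, not drawn from any of the cited systems), the levels in Table 1 map naturally onto a small enumeration:

```python
from enum import IntEnum

class SurgicalAutonomy(IntEnum):
    """Levels of automation in robotic surgery, following Table 1 (adapted from [3, 4])."""
    NO_AUTOMATION = 0           # human performs all surgical tasks
    SOME_ASSISTANCE = 1         # e.g., image guidance or augmented reality rendering
    PARTIAL_AUTOMATION = 2      # reduced human input; surgeon still performs most tasks
    CONDITIONAL_AUTOMATION = 3  # robot performs certain procedural steps without human input
    HIGH_AUTOMATION = 4         # robot performs most of a complex procedure with minimal input
    COMPLETE_AUTOMATION = 5     # robot makes all surgical decisions

def requires_human_operator(level: SurgicalAutonomy) -> bool:
    """Below Level 3, no procedural step can be completed unattended."""
    return level < SurgicalAutonomy.CONDITIONAL_AUTOMATION
```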
Robotic surgery allows operation in deep anatomical spaces (e.g., abdominal and pelvic cavities, synovial joints) through small incisions for cameras and instruments. Real-time AI image enhancement (autonomy level 1) allows enhanced identification of anatomical structures and instruments. Intraoperative visual environments change constantly as dissection or repair tasks progress, which can lead to marked changes in intraoperative image quality. To combat these changes, Ali et al. developed an online preprocessing framework capable of denoising, deblurring, and color-correcting real-time camera imaging to enhance intraoperative visualization in knee arthroscopy; their method outperformed existing image-enhancement approaches while significantly reducing the computation time to image display [5]. Robotic surgeons also frequently use electrocautery devices for dissection and ligation, but this generates smoke that remains trapped in the anatomical space of interest and can temporarily obscure the visual field until it is removed by suction. Wang et al. proposed a convolutional neural network (CNN) coupled with a Swin transformer capable of removing smoke from intraoperative surgical footage, ultimately producing an enhanced, smoke-free surgical view [6]. This is crucial when a surgeon must move quickly with good visualization, such as during an acute intraoperative hemorrhage.
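The published architectures in [5] and [6] are not reproduced here, but the general shape of a real-time frame-preprocessing stage can be sketched. The example below is a classical computer-vision stand-in using OpenCV (denoising, mild sharpening in place of learned deblurring, and a gray-world white balance); the function choices, parameters, and capture source are illustrative assumptions, not the authors’ methods.

```python
import cv2
import numpy as np

def enhance_frame(frame_bgr: np.ndarray) -> np.ndarray:
    """Illustrative per-frame enhancement: denoise, sharpen, and color-correct.

    A classical-CV stand-in for the learned preprocessing in [5], intended only
    to show where such a stage sits in a real-time pipeline.
    """
    # 1. Denoise (fast non-local means on the color image)
    denoised = cv2.fastNlMeansDenoisingColored(frame_bgr, None, 5, 5, 7, 21)

    # 2. Mild unsharp masking as a crude proxy for deblurring
    blur = cv2.GaussianBlur(denoised, (0, 0), sigmaX=3)
    sharpened = cv2.addWeighted(denoised, 1.5, blur, -0.5, 0)

    # 3. Simple gray-world white balance for color correction
    result = sharpened.astype(np.float32)
    mean_per_channel = result.reshape(-1, 3).mean(axis=0)
    result *= mean_per_channel.mean() / (mean_per_channel + 1e-6)
    return np.clip(result, 0, 255).astype(np.uint8)

# Example: process a stream frame by frame (placeholder source; in practice the endoscope feed)
cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("enhanced", enhance_frame(frame))
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```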
Beyond enhancing the surgeon’s intraoperative view, AI is also being used to provide intraoperative information about native tissue. Surgery often involves identifying “surgical planes,” anatomical interfaces between tissues that are free of critical structures such as arteries, large veins, or nerves and are therefore safe for dissection. Kumazu et al. developed a deep learning model, trained on surgical video from robot-assisted gastrectomy, capable of automatically segmenting loose connective tissue fibers to define a safe dissection plane (Fig. 2). Expert surgeons gave the model a mean sensitivity score of 3.52/4.00, indicating good model performance for safe plane identification [7].
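Kumazu et al.’s network is not reimplemented here, but the following minimal sketch shows how the output of a generic binary segmentation model could be overlaid on a surgical frame to flag a candidate dissection plane; the PyTorch model interface, threshold, and overlay color are all assumptions for illustration.

```python
import numpy as np
import torch
import cv2

def overlay_safe_plane(frame_bgr: np.ndarray, model: torch.nn.Module,
                       threshold: float = 0.5) -> np.ndarray:
    """Highlight pixels a segmentation model labels as loose connective tissue.

    `model` is assumed to take a 1x3xHxW float tensor and return 1x1xHxW logits;
    any U-Net-style network trained on annotated surgical video would fit.
    """
    tensor = torch.from_numpy(frame_bgr).permute(2, 0, 1).float().unsqueeze(0) / 255.0
    with torch.no_grad():
        probs = torch.sigmoid(model(tensor))[0, 0].numpy()

    mask = probs > threshold
    overlay = frame_bgr.copy()
    # Tint predicted safe-plane pixels green, blending with the original frame
    overlay[mask] = (0.5 * overlay[mask] + 0.5 * np.array([0, 255, 0])).astype(np.uint8)
    return overlay
```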
Another area in which AI promises advancement is surgical oncology, particularly intraoperative minimization of positive surgical margins to prevent cancer recurrence. In the field of oral and oropharyngeal surgery, Marsden et al. presented a variety of AI models that utilize fiber-based fluorescence lifetime imaging to guide intraoperative dissection. Model features allowed the researchers to generate and overlay a heatmap of probable cancer location (ROC-AUC: 0.88) within the oral cavity to guide surgeons during cancer excision (Fig. 2) [8]. A second innovation, in neurosurgery, utilizes data derived from an ultrasonic aspirator, a device commonly used to remove brain tumors. Bockelmann et al. trained AI models on brain tissue signal feedback from an ultrasonic aspirator; the models were able to distinguish signal differences between native tissue and brain tumor, achieving a mean F1 score of 0.900 with a neural network approach. Intraoperative deployment of such tissue models can help surgeons resect malignancies while preserving as much healthy neural tissue as possible [9]. A final example of surgical margin management was presented by Bianchi et al., who used preoperative multiparametric magnetic resonance imaging (mpMRI) to guide intraoperative frozen tissue sampling for margin detection during robot-assisted radical prostatectomy. An augmented reality 3D (AR3D) model was generated and projected onto the surgical field in the robotic console, identifying the best location to take a frozen tissue specimen that, in theory, was free of cancer cells. Positive surgical margin rates at the level of the identified index lesion were significantly lower than with the standard non-AR3D approach (p = 0.01) [10].
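The discrimination metrics cited above (ROC-AUC and F1) are standard and straightforward to compute. The sketch below uses scikit-learn with synthetic stand-in features, since the fluorescence lifetime and aspirator signals are not publicly available; it illustrates only how such scores are derived, not the cited models themselves.

```python
# Minimal sketch of the metrics cited above, computed on synthetic stand-in data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score, f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 16))  # placeholder feature vectors (not the studies' data)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)  # 1 = tumor

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

probs = clf.predict_proba(X_test)[:, 1]            # per-sample tumor probability
print("ROC-AUC:", roc_auc_score(y_test, probs))    # threshold-free discrimination, as reported in [8]
print("F1:", f1_score(y_test, probs > 0.5))        # balance of precision and recall, as reported in [9]
```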
A surgical field comprises two major elements, native tissue (e.g., anatomical structures, malignancies) and non-native devices (e.g., surgical instruments, clips, sutures), and one of the greatest challenges in implementing AI in surgery is distinguishing between the two. Accurate delineation is critically important for the development of augmented reality ([AR], e.g., surgical field images with computer-generated overlays) enhancements of the surgical field. De Backer et al. presented a model consisting of deep learning networks to delineate instruments during robot-assisted kidney transplantation, which achieved a Dice score of 97.10%. This is a marked improvement over the current standard in AR-guided robotic surgery, with minimal disruption of the 3D overlays highlighting key anatomical structures [11]. Ping et al. presented a similar innovation for instrument detection in surgical endoscopy using a modified CNN and the You Only Look Once v3 algorithm, with sensitivities of 93.02% and 87.05% for surgical instrument and tooltip detection, respectively [12].
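The Dice score reported by De Backer et al. is the standard overlap metric for comparing segmentation masks; a minimal illustration of its computation on toy binary instrument masks is shown below (the masks are fabricated for demonstration only).

```python
import numpy as np

def dice_score(pred_mask: np.ndarray, true_mask: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary masks: 2|A ∩ B| / (|A| + |B|).

    This is the overlap metric behind the 97.10% figure reported in [11];
    the masks used here are illustrative, not the study's data.
    """
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + true.sum() + eps))

# Toy example: a predicted instrument mask shifted by one pixel against ground truth
true = np.zeros((64, 64), dtype=bool); true[10:30, 20:40] = True
pred = np.zeros((64, 64), dtype=bool); pred[11:31, 20:40] = True
print(f"Dice = {dice_score(pred, true):.4f}")
```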
One of the major differences between open and robot-assisted surgery is tactile sensation. Open approaches allow surgeons to palpate anatomical structures or feel changes in tissue resistance during dissection and suturing, something that has not yet been replicated in robotic surgery. The newest generation of surgical robots can display force measurements in the surgical console, but surgeons often struggle to understand how this value translates to force on intraoperative tissue. To address this, Miller et al. conducted a study providing surgeons with haptic (vibratory) feedback during an exercise in which surgeons were asked to draw a circle on a piece of paper with a surgical robot using as little force as possible. They found that the addition of haptic feedback reduced the median maximum exerted force from 6.43 N to 3.57 N (p < 0.001). In another paper, Doria et al. sought to apply haptic feedback to the palpation of anatomical structures. They first developed stiffness models to characterize the mechanical properties of intrauterine leiomyomas and then adapted these models to deliver haptic feedback through a wearable fabric haptic device, such that greater vibration indicated stiffer tissue [13]. Force is also generated intraoperatively when retracting tissue, and excessive force can lead to preventable adverse events such as tissue tears or hemorrhage. Zhu et al. developed a robot for transoral surgery that uses piezoelectric sensors embedded in the instrument tips to detect forces on tissue of up to 15 N. Sensed forces can then be displayed to the surgeon in the console, and automated warnings can help prevent exceeding safe retraction forces (Fig. 3) [14].
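The logic of console force display and retraction warnings can be sketched in a few lines: map the sensed tip force to a normalized haptic intensity and flag values above a safe limit. The 15 N ceiling follows the sensing range described by Zhu et al.; the 5 N warning threshold and the linear mapping are illustrative assumptions, not values from the cited studies.

```python
# Illustrative mapping from a sensed tip force to haptic intensity and a safety warning.
# The 15 N sensing ceiling follows [14]; the 5 N "safe retraction" threshold is an
# assumption for demonstration, not a value taken from the cited studies.
MAX_SENSED_FORCE_N = 15.0
SAFE_RETRACTION_LIMIT_N = 5.0

def haptic_command(force_n: float) -> dict:
    """Map a sensed force (newtons) to a normalized vibration amplitude and warning flag."""
    clipped = max(0.0, min(force_n, MAX_SENSED_FORCE_N))
    return {
        "vibration_amplitude": clipped / MAX_SENSED_FORCE_N,  # 0.0 to 1.0; more force = stronger vibration
        "warn": clipped > SAFE_RETRACTION_LIMIT_N,            # console alert above the safe limit
    }

for f in (1.0, 4.9, 6.43, 12.0):
    print(f"{f:5.2f} N -> {haptic_command(f)}")
```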