Abstract
The availability of computational hardware and developments in (medical) machine learning (MML) increase the clinical usability of medical mixed realities (MMR). Medical instruments have played a vital role in surgery for centuries. To further accelerate the adoption of MML and MMR, three-dimensional (3D) datasets of instruments should be publicly available. The proposed data collection consists of 103 medical instruments from the clinical routine, digitised with structured light scanners. It includes, for example, retractors, forceps, and clamps. The collection can be augmented by generating similar models with 3D software, resulting in an enlarged dataset for analysis. Possible applications include general instrument detection and tracking in operating room settings, freeform marker-less instrument registration for tool tracking in augmented reality, medical simulation and training scenarios in virtual reality, and medical diminished reality in mixed reality. We hope to ease research in the fields of MMR and MML, and also to motivate the release of a wider variety of much-needed surgical instrument datasets.
Background and Summary
Instruments for surgical procedures have been found in archaeological excavations from the Bronze Age and are depicted on stone carvings, on the walls of tombs and on papyrus [1]. The various instruments, such as forceps, hooks or retractors, have undergone a noticeable evolution, especially since the 18th century [2]. Due to the continuing specialisation of surgery over the last decades, a wide variety of specialised surgical instruments has been developed [1]. However, there is still a common pool of instruments such as scissors, scalpels, retractors, forceps and the like [2]. As an interventional discipline like surgery, dentistry has a distinct set of special instruments for the treatment of dental diseases [3,4,5,6]. Furthermore, ISO norms now specify the properties and nature of surgical instruments to meet the special requirements of the human situs (ISO 7151:1988, ISO 7153-1:2016, ISO 7740:1985, ISO 7741:1986, ISO 13402:1995, ISO 6360-3:2005).
Recent developments in deep learning enable advanced computer-assisted surgery systems by detecting and differentiating surgical tools, tracking their movements, and providing feedback to the surgeon [7]. However, surgical data science relies on large-scale datasets of tools and their movement. Current datasets contain images and videos of surgical scenes, annotated with bounding boxes, labels or segmentation masks [8,9]. This limits their utilisation to very specific applications [10,11].
Synthetic datasets have become a necessity in the computer vision domain [12,13]. Furthermore, synthetic data has been shown to be a potentially valuable addition for training machine learning models [14]. As the generalisation ability of deep learning models increases, synthetic datasets are becoming an enticing alternative or addition for training them [13,14,15,16,17,18,19].
In this context, a dataset of three-dimensional (3D) medical instruments can be used to create realistic surgical scenes, suitable for training deep learning algorithms for instrument detection, segmentation, or marker-less 3D instrument registration and tracking [20]. Aside from surgical data science, 3D models of medical instruments have applications in medical simulation and training scenarios in virtual reality [21] and medical mixed reality [22,23]. They can aid in virtually simulating and planning the instrument path in surgery. Virtual surgery planning already occurs frequently in oral and maxillofacial surgery [24,25,26].
Therefore, we present a collection of 3D models of a wide variety of instruments from surgery and dentistry. In contrast to existing datasets, our collection can be used to generate and render an almost unlimited number of realistic scenes, both two- and three-dimensional.
In this paper, we describe our unique data collection, which contains scanned 3D models of 103 medical instruments. The instruments within our collection are mainly related to dentistry and oral and maxillofacial surgery but are not limited to them.
The 3D meshes of the surgical instruments were virtually replicated by scanning instruments with structured light scanners and by subsequent post-processing performed in the proprietary scanning software. All models were visually inspected, analysed and classified into groups such as retractors, forceps and clamps.
We also demonstrate a method to increase the total number of models in the collection. The approach effectively creates new variations of an original model by semi-manually adjusting, scaling and smoothing it. This can be seen as a form of data augmentation for generating large datasets, which can be beneficial for deep learning [13,27,28,29].
Methods
The data collection involves five key steps: instrument preparation for scanning, 3D scanning using structured light scanners, post-processing in proprietary software, analysis of the generated models, and model adaptation to create a variety of models based on the originals. Figure 1 provides an overview of our pipeline, which we will now describe in detail.
103 surgical instruments were used to create the dataset. These instruments were kindly provided by the Department of Oral and Maxillofacial Surgery of the University Hospital Rheinisch-Westfälische Technische Hochschule (RWTH) Aachen. The study was conducted at the Institute for Artificial Intelligence in Medicine (IKIM) of the Essen University Hospital (AöR). For scanning, the instruments were prepared with AESUB white (Scanningspray Vertriebs GmbH, Recklinghausen, North Rhine-Westphalia, Germany).
Two structured light 3D scanners, the Autoscan Inspec (Shining 3D Corporation, Hangzhou, Zhejiang, China) and the Artec Leo (Artec3D, Senningerberg, Canton Luxembourg, Luxembourg), were used for obtaining the 3D models. Post-processing was done with their respective commercially available proprietary software UltraScan version 2.0.0.7 and Artec Studio 17 Professional version 1.0.141. All models were exported from the proprietary software as Stereolithography (STL) files (Stereolithography Interface Specification, June 1988, 3D Systems Inc., Valencia, California, United States).
Additional visual inspection was done in Microsoft 3D Viewer (Microsoft Corporation, Redmond, Washington, United States) and Blender 3.4.1 (The Blender Foundation, Amsterdam, Noord-Holland, Netherlands, https://blender.org). A Python-based Blender add-on was implemented to alter the original models, enabling the creation of a plethora of similar models based on each scanned instrument. Similar models can also be created fully automatically using a Python 3.9 (The Python Software Foundation, Wilmington, Delaware, United States, https://python.org) script, which scales the instruments along the local axes of the original model or applies different types of smoothing (Taubin, Laplacian and Humphrey algorithms).
All processing scripts and the Blender add-on are included in the data repository [30]. The original STL models were analysed according to their geometric properties, such as length, width, height and volume. These descriptors, together with all created STL files, are available in the data repository [30]. The STL models will also be made available on MedShapeNet (https://medshapenet.ikim.nrw) [31]. Figure 1 shows an overview of all steps involved in producing the data collection. Out of the 103 surgical instruments, 49 underwent scanning with the Artec Leo and 55 with the Autoscan Inspec; one instrument was scanned using both scanners. 11 instruments were scanned in alternative configurations, such as open and closed stances: eight of these were scanned in an open stance using the Artec Leo, and three with the Autoscan Inspec, the latter yielding four supplementary scans. To account for diverse post-processing options, we generated two STL files per scan, representing different settings. The sections “Data acquisition and post-processing with Artec Leo and Artec Studio Professional 17” and “Data acquisition with Autoscan Inspec and UltraScan” explain the differences between these two settings. To confirm reproducibility, one instrument was scanned and processed by two different people using the Autoscan Inspec, resulting in an additional scan. Thus, scans using the Artec Leo resulted in (49 + 8) × 2 = 114 STL files, and scans performed using the Autoscan Inspec in (55 + 4 + 1) × 2 = 120 STL files. For details on which instruments were scanned multiple times, we refer to the ‘Overview’ Word file within the data repository [30].
Instrument preparation
The instruments used in the Department of Oral and Maxillofacial Surgery were divided into 27 classes, see Fig. 2. The instruments can be used for a plethora of interventions and actions: for example, retractors provide a clear view, hammers and chisels apply controlled force, clamps hold blood vessels and tissues, forceps grasp and manipulate these tissues, and dental probes examine teeth and gums.
Nearly all instruments are made of stainless steel, due to its durability and resistance to corrosion, and are smoothly polished. The reflective nature of polished stainless steel causes scatter and limits the accuracy of the point cloud data obtained with structured light 3D scanners. Some instruments additionally have black handles, which absorb the light emitted by the scanner, severely limiting the obtained data.
Therefore, to ensure adequate mesh quality, all instruments were prepared with 3D scanning spray. AESUB white, one of the most widely used 3D scanning sprays, is easily washable but does not evaporate on its own, and has a layer thickness of approximately 0.007 millimetres. These properties make this spray suitable for our purpose.
Since the Artec Leo has a 3D point accuracy of up to 0.1 millimetres and a resolution of up to 0.5 millimetres, a couple of spray layers of AESUB white should not noticeably reduce the scan’s resemblance to the real instrument. The Autoscan Inspec, however, has an accuracy of 0.01 millimetres, so spraying was kept to the necessary minimum.
There are sprays on the market that enable the capture of an object’s colours or provide a lower layer thickness than AESUB white; AESUB, for instance, is also available in transparent, orange and yellow variants. However, AESUB orange offers only a slight reduction of a few micrometres in layer thickness, thus still influencing the scan outcome, and AESUB yellow necessitates the use of a spray gun and additional spraying expertise. In our experience, AESUB transparent does not decrease the surgical instrument’s reflectivity enough to allow accurate scanning with the Artec Leo. We therefore concluded that AESUB white is the most suitable choice for scanning our instruments.
This is especially true since instruments scanned with the Artec Leo required even, fully covering layers of spray. Although the Autoscan Inspec was more adept at handling the reflective, shiny and absorbing properties of our instruments, in our setup all stainless-steel instruments or parts still required a single layer of spray for appropriate scans. In accordance with the manufacturer’s instructions, the can was shaken prior to usage and the spray applied at a distance of 15–20 centimetres while slowly and steadily moving around the instrument. An example of the results obtained before and after spraying is shown in Fig. 3. The obvious downside of this method is that the original surface texture cannot be captured; therefore, we consider the universal STL format appropriate for sharing the models created in this study.
Scanners and post-processing
A desktop computer with an AMD Ryzen 9 5900X 12-core processor, 3,200 megahertz DDR4 RAM and an NVIDIA GeForce RTX 3090 graphics card was used for post-processing, analysing and augmenting the data collection. The 3D point cloud data was obtained using the Artec Leo and Autoscan Inspec structured light scanners. The corresponding software, Artec Studio Professional 17 and UltraScan version 2.0.0.7, was used for post-processing and model generation.
Data acquisition and post-processing with Artec Leo and Artec Studio Professional 17
The Artec Leo is a handheld 3D scanner. It utilises a white 12-light-emitting-diode (LED) array light source, with an optimal working distance of 0.35–1.2 metres. An accuracy of 0.2 mm + 0.3 mm/m should be obtainable according to the manufacturer.
As a trade-off between accuracy and the desire to keep the whole instrument within the field of view during scanning, a scanning distance of 0.5 metres was chosen, with the recommended exposure time of one millisecond. To guarantee this distance, the scanner was set to superimpose a distance colour map on the scanned object while scanning. To minimise error, the scanner was set to record only while tracking was maintained. In this context, tracking refers to the automatic estimation of the relative frame position performed by the scanner during recording; it is based on common surface and texture features. Therefore, a simple background rich in texture features was chosen. Recording was performed at 60 high-definition frames per second. The Artec Leo also allows recording additional texture frames; the combined setting was used to record a total of 65 frames per second. Scanning was done by slowly and smoothly encircling the stationary instrument, changing the angle of the scanner relative to the instrument throughout all positions in the circle. The actual scanning rate varied due to the Artec Leo’s feature of recording only when tracking was accurate. Each scan comprised approximately 800–1,200 frames, which took less than a minute to acquire.
Each instrument was scanned in two orientations to ensure a complete, encompassing capture of the instrument for later post-processing. Both scans of the instrument were imported into Artec Studio 17 Professional with a data density factor of 8 for the artificial-intelligence-powered enhanced reconstruction, a manufacturer feature that increases the number of data points and reduces noise within the scan. These are also the maximum recommended settings for our hardware setup.
Figure 4 gives an overview of the steps for post-processing objects scanned with the Artec Leo and its proprietary software. After importing the recording and performing HD reconstruction, the global registration feature was used, and a region of interest was cropped out with the editor tool. Global registration converts all surfaces created from single frames into a single coordinate system. To do so, the software selects geometry and texture points within a frame, matches these points to the other frames, and then tries to minimise the mean differences between the matched points. Unfortunately, the exact algorithm is not publicly disclosed by the manufacturer. Frames with an error distance greater than 0.3 mm were not used to generate surface models. To ensure accurate registration, a key-frame ratio of 0.5 was used for geometry- and texture-based registration, which looks for features within areas of five square millimetres. The key-frame ratio determines the fraction of frames the software utilises for registration, on a scale from zero to one.
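Although the registration algorithm itself is proprietary, the behaviour described above (matching corresponding points across frames and minimising their mean differences) resembles classic rigid point-set registration. As a purely illustrative sketch, the following Python snippet implements the Kabsch algorithm for two frames with known correspondences; the function name and test data are our own and do not reflect Artec Studio internals.

```python
import numpy as np

def rigid_align(src, dst):
    """Kabsch algorithm: rotation R and translation t that minimise
    the mean squared distance between corresponding points
    src (N x 3) and dst (N x 3)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centred point sets
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    # Guard against reflections (det = -1)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# Example: align a rotated copy of a random frame back onto the original
frame = np.random.rand(100, 3)
angle = np.deg2rad(10.0)
Rz = np.array([[np.cos(angle), -np.sin(angle), 0.0],
               [np.sin(angle),  np.cos(angle), 0.0],
               [0.0, 0.0, 1.0]])
R, t = rigid_align(frame @ Rz.T, frame)
print(np.allclose(R @ Rz, np.eye(3)))  # True: the rotation is recovered
```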
Semi-manual alignment was then performed (Fig. 4b), followed by an additional global registration to compensate for potential earlier mismatches based on features that had since been deleted. The background, noise and artefacts were manually removed from both scans using the editor tool. Visual inspection was conducted and, if necessary, additional semi-manual alignment was performed. This was needed in cases where, due to a lack of features on the surgical instrument in combination with its symmetrical shape, the Artec Studio algorithm prioritised background features over instrument features. Subsequently, outlier removal was performed with the 3D-noise level set to three and the accuracy set to 0.2 millimetres, corresponding to the scanner’s maximum 3D resolution recommended by the manufacturer. Outlier removal calculates the mean distance and standard deviation between neighbouring surface points; surface points whose values exceed an interval defined by the mean and standard deviation of all neighbourhood points are classified as outliers and removed from the scene automatically. The 3D-noise level multiplies the standard deviation of the neighbourhood points and thereby controls outlier assignment: a higher 3D-noise level reduces the number of identified outliers. Notably, our earlier manual point-removal stage already contributes to decreased noise on the surfaces. The specific process details, however, remain undisclosed by the manufacturer, Artec3D.
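Although Artec Studio’s exact procedure is undisclosed, statistical outlier removal of the kind described above is straightforward to approximate. The following sketch, assuming a point cloud given as an (N, 3) NumPy array, drops points whose mean distance to their k nearest neighbours exceeds the global mean by more than noise_level standard deviations; the function and parameter names are our own illustrations, not Artec’s.

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_outliers(points, k=8, noise_level=3.0):
    """Drop points whose mean distance to their k nearest neighbours
    exceeds mean + noise_level * std over all points."""
    tree = cKDTree(points)
    # k + 1 because the nearest neighbour of each point is itself
    dists, _ = tree.query(points, k=k + 1)
    mean_dist = dists[:, 1:].mean(axis=1)
    threshold = mean_dist.mean() + noise_level * mean_dist.std()
    return points[mean_dist <= threshold]

cloud = np.random.rand(5000, 3)              # dense cluster of points
cloud = np.vstack([cloud, [[5.0, 5.0, 5.0]]])  # one far-away outlier
print(remove_outliers(cloud).shape)          # (5000, 3): the far point is gone
```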
Once both scans of the instrument were aligned and the background and noise had been removed, fusion was applied to generate a watertight mesh. Both a sharp and a smooth fusion were performed, resulting in two STL models per scan. Sharp fusion retains a higher level of detail and achieves the maximum 3D resolution of 0.2 millimetres according to the manufacturer; its downside is that any noise left in the model after steps a–e of Fig. 4 may be intensified. Smooth fusion results in smoother models and, despite the target resolution of 0.2 millimetres, the software may remove points to reach a maximum mean point distance of 0.6 millimetres. Additionally, models are automatically smoothed; since surgical instruments are generally smooth, this can result in an aesthetically more appealing model. Unfortunately, the exact algorithms are not disclosed by the manufacturer.
Fusion was performed with a resolution of 0.2 millimetres, ultra-HD sensitivity, and exclusion of frames above the maximum error threshold of 0.3 millimetres. In rare cases, the smooth fusion model was manually edited; these cases were the self-retaining retractors and speculums, where fine details and structures were not separated appropriately by the fusion process. In summary, while the sharp fusion models are always presented as-is, the smooth fusion models are processed with the built-in smoothing function of Artec Studio, plus manual editing in cases of small structures that were over-smoothed by the software.
The models were inspected using the Artec software and inside a rendering environment such as Blender. Eight instruments with movable parts, which may be present in different states in a surgical scene, were selected and scanned in different configurations; for example, some of the mouth gags were scanned in an open and a closed state. The final STL files are available in the data repository [30].
Data acquisition with Autoscan Inspec and UltraScan
The Autoscan Inspec is a high-end desktop scanner, ideal for reverse engineering of small parts, that has become well known for its usage in dental applications. It utilises a blue light source and two five-megapixel grey-scale cameras. According to the manufacturer, its accuracy is 0.008 millimetres, with an overall 3D resolution of 0.05 millimetres. Its maximum scanning area is 100 × 100 × 75 millimetres, although scanning an object in multiple orientations does allow for scanning larger objects.
The instrument was attached to the scanner’s robotic table at the end of its arm. The table can rotate 360 degrees, and the arm itself can rotate from 0 to 135 degrees, with 50 degrees being level with the desktop. The scanner has a default path with 10 preset table and arm rotational positions, leading to 10 frames making up a single scan. If desired, the user can manually change these positions and the number of frames. An “add scan” option allows users to scan extra frames in a desired position and add them to the earlier scan sequence. Only for a few thin, lengthy objects was a manual scan path used; the default scan path, with a few additional scans upon visual inspection, was deemed sufficient for the remaining instruments. The scan was edited by removing unwanted data points; then the instrument was rotated 180 degrees and scanned again, a process called a flip scan. The two scans were either automatically aligned using UltraScan’s proprietary alignment method, or semi-manually aligned if automatic alignment was not possible.
Automatic alignment identifies identical data points in the point clouds of the initial and flip scans and aligns the scans accordingly. Given the symmetry of surgical instruments, automatic alignment is challenging and often fails; it was possible for fewer than 10 cases. For the remaining cases, semi-manual alignment had to be performed, introducing potential human error. Alignment success was always assessed by the post-processing operator. Figure 5c1,c2 shows semi-manual alignment, resulting in Fig. 5c3; if automatic alignment succeeded, the steps of Fig. 5c1,c2 were bypassed, directly yielding Fig. 5c3. Undesired data in the point cloud was then removed manually after alignment.
A watertight mesh was created from the resulting point cloud. Models were exported as STL twice: once with the ‘remove highlight’ function enabled, and once without. ‘Remove highlight’ eliminates spikes, i.e., triangles arising from point cloud data that deviate from the smooth surrounding surfaces; such spikes are often induced by reflective surfaces. Unfortunately, the manufacturer, Shining 3D, does not provide exact algorithmic details for surface model generation.
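While Shining 3D’s implementation is undisclosed, a simple stand-in for this kind of spike removal can be expressed with the Trimesh library: discard faces that meet a neighbour at an implausibly sharp dihedral angle. This is only a sketch of the general idea with an arbitrary threshold, not a reproduction of the ‘remove highlight’ function, and the file name is hypothetical.

```python
import numpy as np
import trimesh

def drop_spikes(mesh, max_angle_deg=80.0):
    """Remove faces that form a dihedral angle sharper than
    max_angle_deg with any adjacent face (likely spikes)."""
    sharp = mesh.face_adjacency_angles > np.deg2rad(max_angle_deg)
    bad_faces = np.unique(mesh.face_adjacency[sharp])
    keep = np.ones(len(mesh.faces), dtype=bool)
    keep[bad_faces] = False
    mesh.update_faces(keep)
    mesh.remove_unreferenced_vertices()
    return mesh

mesh = trimesh.load("instrument.stl")  # hypothetical input model
print(len(drop_spikes(mesh).faces))
```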
Three instruments were scanned in different configurations. For example, we scanned a surgical knife in two configurations: one with the protection clip, and another with the protection clip removed.
Instruments not completely captured with two scans (the initial scan and the flip scan) were scanned with three or four scans instead. If an instrument did not fit into the scanner, or if multiple additional scans overloaded the PC’s RAM, it was scanned with the Artec Leo instead of the Autoscan Inspec. The final STL files are available in the data repository [30].
Generating multiple similar models
Blender add-on
To illustrate how virtual instruments can easily be transformed into multiple similar instruments, we implemented a Blender Python add-on to assist in this task, see Fig. 6. The add-on is based on Blender’s simple deform function, which allows bending, twisting, tapering and stretching of the 3D meshes, and on basic transformations, including rotation, translation and scaling.
Although these operations affect the entire mesh of a model, users have the option to automatically generate a fitting lattice (Fig. 6a1-2) and manually define vertex groups within the lattice, such as those corresponding to handle and tip regions. A lattice is a non-renderable 3D deformation cage. Once assigned, these vertex groups become accessible within the add-on, and predetermined groups can be linked to specific operations. This methodology ensures smoother transitions between vertex groups within the final mesh than working with the vertex groups directly.
The user can determine which operation to apply to which group of the lattice, including the minimum and maximum angle, number of units, or factors, as well as a step size between the minimum and maximum value. After the user manually creates the vertex group(s) and sets the desired deform and transformation parameters, the add-on automatically creates all models and saves them to the current Blender directory as STL files. Note that the add-on uses the local axes of the model when specifying around which axis an action is performed.
If a factor or angle is zero, or a scaling factor is one, the model is not saved, since applying the change would not alter the model. The same applies when rotation, scaling or translation operations are applied to the entire mesh: even though the model’s position and orientation relative to the world coordinate frame origin change, the model itself does not.
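The add-on itself ships with the data repository [30]; the following condensed bpy sketch merely illustrates the underlying mechanism of sweeping a Simple Deform modifier over a parameter range and exporting each variant as STL. It assumes Blender 3.x, and the object name “Instrument”, the vertex group “tip” and the angle range are illustrative, not taken from the add-on.

```python
# Run inside Blender's Python environment (Blender 3.x conventions).
import math
import bpy

obj = bpy.data.objects["Instrument"]        # hypothetical object name
bpy.context.view_layer.objects.active = obj

for angle_deg in range(-30, 31, 10):
    if angle_deg == 0:
        continue                            # identity bend: model unchanged, skip
    mod = obj.modifiers.new("Deform", 'SIMPLE_DEFORM')
    mod.deform_method = 'BEND'              # also: 'TWIST', 'TAPER', 'STRETCH'
    mod.deform_axis = 'Z'                   # local axis of the model
    mod.angle = math.radians(angle_deg)
    mod.vertex_group = "tip"                # restrict the deform to one region
    obj.select_set(True)
    bpy.ops.export_mesh.stl(
        filepath=f"//instrument_bend_{angle_deg}.stl",
        use_selection=True)                 # modifiers are applied on export
    obj.modifiers.remove(mod)               # reset for the next variant
```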
The add-on also has the option to create, show and save several differently smoothed models, using the subsurface, corrective smooth, Laplacian smooth and normal smooth modifiers, and it can create multiple rescaled and smoothed meshes from all STL files found in a given directory path, e.g., the current Blender directory. We did not use these functions for the collection, since a Python script utilising the Trimesh library does this more efficiently and fully automatically, and storing all variants would require enormous amounts of data storage. We therefore deemed it more appropriate to provide examples and let potential users run the scripts themselves.
As an example of the Blender add-on usage, a single instrument from each of 12 different classes was modified using the add-on. An overview of the settings used and the resulting models is given within the respective instrument folders in the data repository [30].
Python script
We furthermore provide a Python script that applies additional modifications to the instruments within our collection, to enlarge the dataset even further. Using the Trimesh library, this script can smooth the meshes and scale them along their local axes. Suppose we scale one model with factors from 0.5 to 1.5 in steps of 0.1 along all three axes; we could then generate 11³ − 1 = 1,330 similar models. We subtract one model because rescaling with a factor of one along all three axes leaves the model unchanged. This can be useful as a form of data augmentation for creating deep-learning datasets. The Python script also implements automatic Taubin, Laplacian and Humphrey smoothing of the input models [32,33].
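A minimal sketch of this kind of augmentation with Trimesh is shown below. It is not the repository script itself; the file names and smoothing parameters are illustrative.

```python
import itertools
import numpy as np
import trimesh
from trimesh import smoothing

mesh = trimesh.load("chisel.stl")            # hypothetical input model
factors = np.round(np.arange(0.5, 1.51, 0.1), 2)  # 11 factors per axis

count = 0
for sx, sy, sz in itertools.product(factors, repeat=3):
    if (sx, sy, sz) == (1.0, 1.0, 1.0):
        continue                             # identity scaling: skip
    variant = mesh.copy()
    variant.apply_scale([sx, sy, sz])        # scale along the local axes
    variant.export(f"chisel_{sx}_{sy}_{sz}.stl")
    count += 1
print(count)                                 # 11**3 - 1 = 1330

# Smoothed variants, each computed on a fresh copy of the original
for name, fn in [("taubin", smoothing.filter_taubin),
                 ("laplacian", smoothing.filter_laplacian),
                 ("humphrey", smoothing.filter_humphrey)]:
    variant = mesh.copy()
    fn(variant, iterations=10)               # smooths the mesh in place
    variant.export(f"chisel_smooth_{name}.stl")
```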
At the start of the process, the user is prompted to provide a main directory, together with inputs specifying whether scaling and smoothing are desired, along with the scaling factors and number of iterations, respectively. The script then searches for all STL files within the main directory and its sub-directories and applies the specified scaling and smoothing operations accordingly. The resulting files are exported to newly created folders, which mirror the folder structure of the main directory. We provide examples for two instruments in our data repository [30], as shown in Fig. 7.
Analysis
All scanned models were made watertight in the proprietary software and visually inspected in Microsoft 3D Viewer. The use of scanning spray and the recommended settings described in the sections “Instrument preparation” to “Data acquisition with Autoscan Inspec and UltraScan” is expected to result in submillimetre-precise scans, as specified by the manufacturers Artec3D and Shining 3D.
Using Python with the NumPy and Trimesh libraries, each scanned model was fitted with a tightly enclosing bounding box, automatically oriented to minimise its volume. From this bounding box, the width, height and length in millimetres were calculated. These measurements were compared to the physical instrument using a flexible millimetre ruler to test whether the models were precise at the millimetre level. For this assessment, we utilised the virtual models generated without the “remove highlight” function (Autoscan Inspec) and the models generated with sharp fusion (Artec Leo).
The Trimesh library was also used to calculate the volume of the models. When comparing models from identical scans, i.e., the smooth and sharp fusion, or the scans with and without highlight removal, the average difference is less than 1 millimetre in all directions.
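The measurements can be reproduced along the following lines; this is a sketch of the approach rather than the repository script, and the file name is hypothetical.

```python
import trimesh
from trimesh import bounds

mesh = trimesh.load("forceps.stl")          # hypothetical model file
# Minimum-volume oriented bounding box: transform to origin plus extents
to_origin, extents = bounds.oriented_bounds(mesh)
length, width, height = sorted(extents, reverse=True)
print(f"L x W x H: {length:.2f} x {width:.2f} x {height:.2f} mm")
# Enclosed volume in cubic millimetres (requires a watertight mesh)
print(f"volume: {mesh.volume:.1f} mm^3")
```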
We scanned ‘Arterial Clamp Halsted Mosquito 1’ twice with the Autoscan Inspec and ‘Inspec Mouth Gag Denhart 1’ with both 3D scanners. The deviation between these virtual models was under 1.5 millimetres, suggesting that human error in the manual post-processing contributed less than 1.5 millimetres of error. Individual measurements can be found in “Table 2 MeasurementsOnVirtualModelsUsingTrimeshLibrary” in the data repository [30].
Data Records
The final data is stored in a repository [30] according to the folder structure shown in Fig. 7. An overview of the classes is given in Fig. 2. The collection consists of scans of 103 surgical instruments. 49 instruments were scanned with the Artec Leo, resulting in two STL files each: one with sharp fusion settings and one with smooth fusion settings and minimal manual editing. 55 instruments were scanned with the Autoscan Inspec; for these scans, we provide one STL model created with the automatic “remove highlight” function and one without.
We scanned 11 instruments in various configurations, including open and closed grip positions, as well as scalpel handles with and without blades. Among these, eight instruments were scanned using the Artec Leo and three using the Autoscan Inspec. The specific instrument names can be found in the ‘Overview’ Word file available in the data repository [30].
To demonstrate the capabilities of the provided add-on, we smoothed, deformed, or partially rotated, scaled or translated 12 STL models from different instrument classes, resulting in 6,263 similar instruments.
We furthermore applied our Python script ‘RescaleAndSmooth_v2.py’ to two instrument classes, Chisels and Langenbeck Retractors, generating 5,380 STL files by rescaling and smoothing.
For the original (non-deformed, non-scaled) 3D models, we report the total volume in cubic millimetres and the length, width and height in millimetres. These measurements were performed on the virtual models using a Python script that is included in the data repository [30].
We will also share these original instruments on MedShapeNet (https://medshapenet.ikim.nrw) to make them easily available and browsable by the medical imaging and computing community [31].
Technical Validation
The main issue regarding the technical validation of the 3D models is the accuracy of the spatial geometry reconstructed by the scanners, Artec Leo and Autoscan Inspec, and their proprietary software. The manufacturers Artec 3D and Shining 3D specify the spatial 3D resolution as 0.2 millimetres and 0.05 millimetres, respectively. Provided that 3D scanning spray and the recommended settings are used, submillimetre accuracy is expected.
Deviations in spatial geometry between a 3D model and the physical instrument are therefore most likely due to manual post-processing, for instance from unknowingly removing point cloud data while removing artefacts, or from misalignment of multiple scans while fusing them into a single model. Therefore, we manually compared each model’s length, calculated using the Trimesh library, with the instrument’s physical length. Still, since this measurement is difficult for three-dimensional objects with curvatures, small errors might go unnoticed. We want to point out that these minor deviations will not hamper the collection’s utility for the use cases mentioned throughout this article.
A limiting factor of the presented data is the lack of real texture information, which is lost due to the necessary usage of 3D scanning spray. While transparent scanning sprays exist, we found that they significantly impede scanning accuracy; we therefore opted for precise scanning in lieu of real texture. The Autoscan Inspec is capable of scanning without spray, but since less data is acquired that way, producing a high-quality mesh requires too many flip scans, which makes post-processing infeasible. Realistic textures can, however, easily be applied using Blender or a game engine, e.g., Unity3D (Unity Technologies, San Francisco, California, United States) or Unreal Engine, see Fig. 8 or Fig. 2.
Since we store our models in a standard format (STL), they are compatible with a large variety of visualisation and processing software.
Usage Notes
STL is widely used for computer-aided design and manufacturing [34]. STL models can be analysed, manipulated and processed with open-source software such as Blender, 3D Viewer or Trimesh in Python, among many others. The STL models can be used for computer vision applications such as instrument detection, segmentation or tracking, as well as in medical mixed-reality simulation scenarios. To this end, the models can be integrated into game engines, such as Unity3D or Unreal Engine, to create mixed-reality applications or to render an unlimited number of photorealistic scenes. The models can be used directly or manipulated to have a more appropriate texture. With some adaptation, the models can also be used for 3D printing. The unprocessed raw scan data, however, can only be accessed with the commercially available proprietary software of the 3D scanners and is of less interest for the use cases mentioned.
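As a simple illustration of such manipulation, the following Blender snippet imports one of the STL models and assigns a metallic, polished material approximating stainless steel; the file path and material values are illustrative assumptions, not part of the collection.

```python
# Run inside Blender's Python environment (Blender 3.x conventions).
import bpy

bpy.ops.import_mesh.stl(filepath="Retractor_Langenbeck_1.stl")  # hypothetical path
obj = bpy.context.selected_objects[0]          # the freshly imported mesh

mat = bpy.data.materials.new("StainlessSteel")
mat.use_nodes = True
bsdf = mat.node_tree.nodes["Principled BSDF"]
bsdf.inputs["Metallic"].default_value = 1.0    # fully metallic
bsdf.inputs["Roughness"].default_value = 0.15  # polished finish
obj.data.materials.append(mat)
```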
The material can be copied and redistributed in any medium or format, and it is free to adapt, remix, transform, and build upon. The data within this work is licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/).
Code availability
The proprietary software of the Artec Leo and Autoscan Inspec was used for processing the scans. For the analysis of the instruments’ dimensions, a Python script was written using the Trimesh, NumPy and Pandas libraries. Another Python script is provided for rescaling and smoothing the original models into a multitude of models, using deformations and affine transformations. All code is available in the data repository [30].
References
1. Kirkup, J. R. The history and evolution of surgical instruments. I. Introduction. Ann R Coll Surg Engl 63, 279–285 (1981).
2. Kirkup, J. R. The history and evolution of surgical instruments. VII. Spring forceps (tweezers), hooks and simple retractors. Ann R Coll Surg Engl 78, 544–552 (1996).
3. Donovan, T. E., Boushell, L. W. & Eidson, R. S. in Sturdevant’s Art and Science of Operative Dentistry 7th edn (eds Ritter, A. V., Boushell, L. W. & Walter, R.) Ch. 14 (Elsevier, 2019).
4. Singh, H., Kaur, M., Dhillon, J. S., Mann, J. S. & Kumar, A. Evolution of restorative dentistry from past to present. Indian J Dent Sci 9, 38–43 (2017).
5. Holmgren, C. J., Roux, D. & Doméjean, S. Minimal intervention dentistry: part 5. Atraumatic restorative treatment (ART) – a minimum intervention and minimally invasive approach for the management of dental caries. Br Dent J 214, 11–18 (2013).
6. Mamoun, J. Use of elevator instruments when luxating and extracting teeth in dentistry: clinical techniques. J Korean Assoc Oral Maxillofac Surg 43, 204–211 (2017).
7. Rodrigues, M., Mayo, M. & Patros, P. Surgical tool datasets for machine learning research: A survey. Int J Comput Vis 130, 2222–2248 (2022).
8. Useche Murillo, P. C., Moreno, R. J. & Pinzon Arenas, J. O. Comparison between CNN and Haar classifiers for surgical instrumentation classification. Ces 10, 1351–1363 (2017).
9. Ramesh, A., Beniwal, M., Uppar, A. M., Vikas, V. & Rao, M. Microsurgical tool detection and characterization in intra-operative neurosurgical videos. Annu Int Conf IEEE Eng Med Biol Soc 2021, 2676–2681 (2021).
10. Sestini, L., Rosa, B., De Momi, E., Ferrigno, G. & Padoy, N. FUN-SIS: A Fully Unsupervised approach for Surgical Instrument Segmentation. Med Image Anal 85, 102751 (2023).
11. Kong, X. et al. Accurate instance segmentation of surgical instruments in robotic surgery: model refinement and cross-dataset evaluation. Int J Comput Assist Radiol Surg 16, 1607–1614 (2021).
12. Paulin, G. & Ivasic-Kos, M. Review and analysis of synthetic dataset generation methods and techniques for application in computer vision. Artif Intell Rev 56, 9221–9265 (2023).
13. Ferreira, A. et al. GAN-based generation of realistic 3D data: A systematic review and taxonomy. Preprint at http://arxiv.org/abs/2207.01390 (2022).
14. Zheng, J. et al. Structured3D: A Large Photo-Realistic Dataset for Structured 3D Modeling. In Computer Vision – ECCV 2020 (eds Vedaldi, A., Bischof, H., Brox, T. & Frahm, J.-M.) 519–535 (Springer International Publishing, 2020).
15. Zhou, P. et al. Deep Video Inpainting Detection. In 32nd British Machine Vision Conference (BMVC) 35–47 (BMVA Press, 2021).
16. Xu, R., Li, X., Zhou, B. & Loy, C. C. Deep Flow-Guided Video Inpainting. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 3723–3732 (2019).
17. Liao, L., Xiao, J., Wang, Z., Lin, C.-W. & Satoh, S. Image Inpainting Guided by Coherence Priors of Semantics and Textures. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 6535–6544 (2021).
18. Yu, J. et al. Free-form image inpainting with gated convolution. In IEEE/CVF International Conference on Computer Vision (ICCV) 4470–4479 (2019).
19. Kari, M. et al. TransforMR: Pose-Aware Object Substitution for Composing Alternate Mixed Realities. In IEEE International Symposium on Mixed and Augmented Reality (ISMAR) 69–79 (2021).
20. Gsaxner, C., Li, J., Pepe, A., Schmalstieg, D. & Egger, J. Inside-out instrument tracking for surgical navigation in augmented reality. In Proceedings of the 27th ACM Symposium on Virtual Reality Software and Technology 1–11 (2021).
21. Lungu, A. J. et al. A review on the applications of virtual reality, augmented reality and mixed reality in surgical simulation: an extension to different kinds of surgery. Expert Rev Med Devices 18, 47–62 (2021).
22. Ienaga, N. et al. First deployment of diminished reality for anatomy education. In IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct) 294–296 (2016).
23. Mori, S., Ikeda, S. & Saito, H. A survey of diminished reality: Techniques for visually concealing, eliminating, and seeing through real objects. IPSJ Transactions on Computer Vision and Applications 9, 17 (2017).
24. Zhao, L., Patel, P. K. & Cohen, M. Application of virtual surgical planning with computer assisted design and manufacturing technology to cranio-maxillofacial surgery. Arch Plast Surg 39, 309–316 (2012).
25. Ayoub, A. & Pulijala, Y. The application of virtual reality and augmented reality in Oral & Maxillofacial Surgery. BMC Oral Health 19, 238 (2019).
26. Hua, J., Aziz, S. & Shum, J. W. Virtual surgical planning in oral and maxillofacial surgery. Oral and Maxillofacial Surgery Clinics of North America 31, 519–530 (2019).
27. Shorten, C. & Khoshgoftaar, T. M. A survey on Image Data Augmentation for Deep Learning. J Big Data 6, 60 (2019).
28. Wodzinski, M., Daniol, M. & Hemmerling, D. Improving the Automatic Cranial Implant Design in Cranioplasty by Linking Different Datasets. In Towards the Automatization of Cranial Implant Design in Cranioplasty II (eds Li, J. & Egger, J.) 29–44 (Springer International Publishing, 2021).
29. Ellis, D. G. & Aizenberg, M. R. Deep Learning Using Augmentation via Registration: 1st Place Solution to the AutoImplant 2020 Challenge. In Towards the Automatization of Cranial Implant Design in Cranioplasty (eds Li, J. & Egger, J.) 47–55 (Springer International Publishing, 2020).
30. Luijten, G. et al. 3D-COSI ~ 3D Collection of Surgical Instruments. Zenodo https://doi.org/10.5281/zenodo.10091715 (2023).
31. Li, J. et al. MedShapeNet — A Large-Scale Dataset of 3D Medical Shapes for Computer Vision. Preprint at http://arxiv.org/abs/2308.16139 (2023).
32. Barroqueiro, B., Andrade-Campos, A., Dias-de-Oliveira, J. & Valente, R. A. F. Bridging Between Topology Optimization and Additive Manufacturing via Laplacian Smoothing. Journal of Mechanical Design 143 (2021).
33. Vollmer, J., Mencl, R. & Muller, H. Improved Laplacian Smoothing of Noisy Surface Meshes. Computer Graphics Forum 18, 131–138 (1999).
34. Noorani, R. Rapid Prototyping: Principles and Applications 1st edn (John Wiley & Sons, 2005).
Acknowledgements
This work was supported by the REACT-EU project KITE (grant number: EFRE-0801977, Plattform für KI-Translation Essen, https://kite.ikim.nrw/), FWF enFaced 2.0 (grant number: KLI-1044, https://enfaced2.ikim.nrw/), and the Clinician Scientist Program of the Faculty of Medicine RWTH Aachen University. Christina Gsaxner was funded by the Advanced Research Opportunities Program (AROP) of RWTH Aachen University. Instruments were provided by the Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen. The STL files of the surgical instruments are available within MedShapeNet (https://medshapenet.ikim.nrw) [31]. All mentioned data except the raw scans are available in the data repository [30].
Funding
Open Access funding enabled and organized by Projekt DEAL.
Author information
Contributions
Collected the data: G.L., F.H., B.P. Performed screening and selection: G.L., N.A., F.H., B.P., J.E. Processed the data: G.L., N.A. Contributed materials and analysis: G.L., C.G., A.P., M.K., B.P., J.E. Wrote the paper: G.L., C.G., J.L., A.P., M.K., J.K., B.P., J.E. Supervised the project: X.C., B.P., J.E.
Ethics declarations
Competing interests
The authors declare no competing interests.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Luijten, G., Gsaxner, C., Li, J. et al. 3D surgical instrument collection for computer vision and extended reality. Sci Data 10, 796 (2023). https://doi.org/10.1038/s41597-023-02684-0
Received: 25 April 2023
Accepted: 23 October 2023
Published: 11 November 2023