Publications


2024

  • B. Becker, D. Bhattacharya, C. Betz, A. S. Hoffmann (2024), "Validation and correlation with clinical data of a newly developed computer aided diagnostic system for the classification of paranasal anomalies in the maxillary sinus from MRI images", Laryngo-Rhino-Otologie. Vol. 103(S 02) Georg Thieme Verlag KG. [Abstract] [BibTeX] [DOI] [URL]
  • Abstract: Large scale population studies are used to analyse the rate of finding sinus opacities in cranial MRIs (cMRI). Artificial Intelligence support systems can automate the detection of sinus opacities and reduce the workload of clinicians. We developed and evaluated a Computer Aided Diagnostics system based on a 3D Convolutional Neural Network (3D CNN) that automatically extracts and classifies maxillary sinus (MS) from cMRI.
    BibTeX:
    @article{BBecker2024,
      author = {Becker, Benjamin and Bhattacharya, Debayan and Betz, Christian and Hoffmann, Anna Sophie},
      title = {Validation and correlation with clinical data of a newly developed computer aided diagnostic system for the classification of paranasal anomalies in the maxillary sinus from MRI images},
      journal = {Laryngo-Rhino-Otologie},
      publisher = {Georg Thieme Verlag KG},
      year = {2024},
      volume = {103},
      number = {S 02},
      url = {http://www.thieme-connect.de/products/ejournals/abstract/10.1055/s-0044-1784518},
      doi = {10.1055/s-0044-1784518}
    }
    
  • L. Beckers, S. Gerlach, O. Lübke, A. Schlaefer, S. Schupp (2024), "Sliced online model checking for optimizing the beam scheduling problem in robotic radiation therapy" Vol. 399 NICTA. [BibTeX] [DOI] [URL]
  • BibTeX:
    @conference{Beckers_Gerlach_Lübke_Schlaefer_Schupp_2024,
      author = {Beckers, Lars and Gerlach, Stefan and Lübke, Ole and Schlaefer, Alexander and Schupp, Sibylle},
      title = {Sliced online model checking for optimizing the beam scheduling problem in robotic radiation therapy},
      publisher = {NICTA},
      year = {2024},
      volume = {399},
      url = {https://hdl.handle.net/11420/47310},
      doi = {10.15480/882.9528}
    }
    
  • F. Behrendt, D. Bhattacharya, J. Krüger, R. Opfer, A. Schlaefer (2024), "Patched Diffusion Models for Unsupervised Anomaly Detection in Brain MRI.", In Medical Imaging with Deep Learning., 10-12 Jul, 2024. Vol. 227.,1019-1032. PMLR:. [Abstract] [BibTeX] [URL]
  • Abstract: The use of supervised deep learning techniques to detect pathologies in brain MRI scans can be challenging due to the diversity of brain anatomy and the need for annotated data sets. An alternative approach is to use unsupervised anomaly detection, which only requires sample-level labels of healthy brains to create a reference representation. This reference representation can then be compared to unhealthy brain anatomy in a pixel-wise manner to identify abnormalities. To accomplish this, generative models are needed to create anatomically consistent MRI scans of healthy brains. While recent diffusion models have shown promise in this task, accurately generating the complex structure of the human brain remains a challenge. In this paper, we propose a method that reformulates the generation task of diffusion models as a patch-based estimation of healthy brain anatomy, using spatial context to guide and improve reconstruction. We evaluate our approach on data of tumors and multiple sclerosis lesions and demonstrate a relative improvement of 25.1% compared to existing baselines.
    BibTeX:
    @inproceedings{pmlr-v227-behrendt24a,
      author = {F. Behrendt and D. Bhattacharya and J. Krüger and R. Opfer and A. Schlaefer},
      editor = {Oguz, Ipek and Noble, Jack and Li, Xiaoxiao and Styner, Martin and Baumgartner, Christian and Rusu, Mirabela and Heinmann, Tobias and Kontos, Despina and Landman, Bennett and Dawant, Benoit},
      title = {Patched Diffusion Models for Unsupervised Anomaly Detection in Brain MRI.},
      booktitle = {Medical Imaging with Deep Learning},
      publisher = {PMLR},
      year = {2024},
      volume = {227},
      pages = {1019-1032},
      url = {https://proceedings.mlr.press/v227/behrendt24a.html}
    }
    
  • F. Behrendt, D. Bhattacharya, L. Maack, J. Krüger, R. Opfer, R. Mieling, A. Schlaefer (2024), "Diffusion Models with Ensembled Structure-Based Anomaly Scoring for Unsupervised Anomaly Detection", In 2024 IEEE International Symposium on Biomedical Imaging (ISBI), 1-4. [Abstract] [BibTeX] [DOI]
  • Abstract: Supervised deep learning techniques show promise in medical image analysis. However, they require comprehensive annotated data sets, which poses challenges, particularly for rare diseases. Consequently, unsupervised anomaly detection (UAD) emerges as a viable alternative for pathology segmentation, as only healthy data is required for training. However, recent UAD anomaly scoring functions often focus on intensity only and neglect structural differences, which impedes the segmentation performance. This work investigates the potential of Structural Similarity (SSIM) to bridge this gap. SSIM captures both intensity and structural disparities and can be advantageous over the classical l1 error. However, we show that there is more than one optimal kernel size for the SSIM calculation for different pathologies. Therefore, we investigate an adaptive ensembling strategy for various kernel sizes to offer a more pathology-agnostic scoring mechanism. We demonstrate that this ensembling strategy can enhance the performance of DMs and mitigate the sensitivity to different kernel sizes across varying pathologies, highlighting its promise for brain MRI anomaly detection.
    BibTeX:
    @inproceedings{Behrendt.2024Isbi,
      author = {Behrendt, F. and Bhattacharya, D. and Maack, L. and Krüger, J. and Opfer, R. and Mieling, R. and Schlaefer, A.},
      title = {Diffusion Models with Ensembled Structure-Based Anomaly Scoring for Unsupervised Anomaly Detection},
      booktitle = {2024 IEEE International Symposium on Biomedical Imaging (ISBI)},
      journal = {2024 IEEE International Symposium on Biomedical Imaging (ISBI)},
      year = {2024},
      pages = {1-4},
      doi = {10.1109/ISBI56570.2024.10635828}
    }
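
    For readers who want to see the ensembling idea from the entry above in code: a minimal sketch (not the authors' implementation) that averages SSIM-based anomaly maps over several window sizes, assuming a pseudo-healthy reconstruction is already available. Function and variable names are illustrative only.

      # Minimal sketch: ensemble SSIM anomaly maps over several window sizes,
      # assuming 2D images with intensities in [0, 1] and an existing reconstruction.
      import numpy as np
      from skimage.metrics import structural_similarity as ssim

      def ensembled_ssim_anomaly_map(image, reconstruction, win_sizes=(3, 7, 11, 15)):
          """Average (1 - SSIM) maps computed with different window sizes."""
          maps = []
          for w in win_sizes:
              _, ssim_map = ssim(image, reconstruction, win_size=w,
                                 data_range=1.0, full=True)
              maps.append(1.0 - ssim_map)  # high value = structurally dissimilar
          return np.mean(maps, axis=0)

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          img = rng.random((128, 128))
          rec = np.clip(img + 0.05 * rng.standard_normal((128, 128)), 0, 1)
          anomaly_map = ensembled_ssim_anomaly_map(img, rec)
          print(anomaly_map.shape, anomaly_map.mean())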
    
  • F. Behrendt, D. Bhattacharya, L. Maack, J. Krüger, R. Opfer, A. Schlaefer (2024), "Combining Reconstruction-based Unsupervised Anomaly Detection with Supervised Segmentation for Brain MRIs", In Submitted to Medical Imaging with Deep Learning. ,In Print. [Abstract] [BibTeX] [URL]
  • Abstract: In contrast to supervised deep learning approaches, unsupervised anomaly detection (UAD) methods can be trained with healthy data only and do not require pixel-level annotations, enabling the identification of unseen pathologies. While this is promising for clinical screening tasks, reconstruction-based UAD methods fall short in segmentation accuracy compared to supervised models. Therefore, self-supervised UAD approaches have been proposed to improve segmentation accuracy. Typically, synthetic anomalies are used to train a segmentation network in a supervised fashion. However, this approach does not effectively generalize to real pathologies. We propose a framework combining reconstruction-based and self-supervised UAD methods to improve both segmentation performance for known anomalies and generalization to unknown pathologies. The framework includes an unsupervised diffusion model trained on healthy data to produce pseudo-healthy reconstructions and a supervised Unet trained to delineate anomalies from deviations between input- reconstruction pairs. Besides the effective use of synthetic training data, this framework allows for weakly-supervised training with small annotated data sets, generalizing to unseen pathologies. Our results show that with our approach, utilizing annotated data sets during training can substantially improve the segmentation performance for in-domain data while maintaining the generalizability of reconstruction-based approaches to pathologies unseen during training.
    BibTeX:
    @inproceedings{behrendt2024combining,
      author = {Finn Behrendt and Debayan Bhattacharya and Lennart Maack and Julia Krüger and Roland Opfer and Alexander Schlaefer},
      title = {Combining Reconstruction-based Unsupervised Anomaly Detection with Supervised Segmentation for Brain MRIs},
      booktitle = {Submitted to Medical Imaging with Deep Learning},
      year = {2024},
      pages = {In Print},
      url = {https://openreview.net/forum?id=iWfUcg4FrD}
    }
    
  • F. Behrendt, D. Bhattacharya, R. Mieling, L. Maack, J. Krüger, R. Opfer, A. Schlaefer (2024), "Leveraging the Mahalanobis Distance to Enhance Unsupervised Brain MRI Anomaly Detection", In Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024, Cham, 394-404. Springer Nature Switzerland. [Abstract] [BibTeX] [DOI]
  • Abstract: Unsupervised Anomaly Detection (UAD) methods rely on healthy data distributions to identify anomalies as outliers. In brain MRI, a common approach is reconstruction-based UAD, where generative models reconstruct healthy brain MRIs, and anomalies are detected as deviations between input and reconstruction. However, this method is sensitive to imperfect reconstructions, leading to false positives that impede the segmentation. To address this limitation, we construct multiple reconstructions with probabilistic diffusion models. We then analyze the resulting distribution of these reconstructions using the Mahalanobis distance to identify anomalies as outliers. By leveraging information about normal variations and covariance of individual pixels within this distribution, we effectively refine anomaly scoring, leading to improved segmentation. Our experimental results demonstrate substantial performance improvements across various data sets. Specifically, compared to relying solely on single reconstructions, our approach achieves relative improvements of 15.9%, 35.4%, 48.0%, and 4.7% in terms of AUPRC for the BRATS21, ATLAS, MSLUB and WMH data sets, respectively.
    BibTeX:
    @inproceedings{10.1007/978-3-031-72120-5_37,
      author = {Behrendt, Finn and Bhattacharya, Debayan and Mieling, Robin and Maack, Lennart and Krüger, Julia and Opfer, Roland and Schlaefer, Alexander},
      editor = {Linguraru, Marius George and Dou, Qi and Feragen, Aasa and Giannarou, Stamatia and Glocker, Ben and Lekadir, Karim and Schnabel, Julia A.},
      title = {Leveraging the Mahalanobis Distance to Enhance Unsupervised Brain MRI Anomaly Detection},
      booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
      publisher = {Springer Nature Switzerland},
      year = {2024},
      pages = {394-404},
      doi = {10.1007/978-3-031-72120-5_37}
    }
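
    To make the scoring idea from the entry above concrete: a minimal sketch (not the authors' implementation) of scoring each pixel against a set of reconstructions. With a single intensity value per pixel, the Mahalanobis distance reduces to a variance-normalized deviation from the per-pixel mean; names and data are illustrative only.

      # Minimal sketch: score an input image against the per-pixel distribution
      # of N pseudo-healthy reconstructions (diagonal / per-pixel variance case).
      import numpy as np

      def mahalanobis_anomaly_map(image, reconstructions, eps=1e-6):
          """image: (H, W); reconstructions: (N, H, W) stack of reconstructions."""
          mean = reconstructions.mean(axis=0)
          std = reconstructions.std(axis=0)
          return np.abs(image - mean) / (std + eps)

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          recs = rng.normal(0.5, 0.02, size=(16, 64, 64))   # simulated reconstructions
          img = recs.mean(axis=0).copy()
          img[20:30, 20:30] += 0.3                          # synthetic "lesion"
          print(mahalanobis_anomaly_map(img, recs)[25, 25]) # high score inside lesion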
    
  • F. Behrendt, S. Sonawane, D. Bhattacharya, L. Maack, J. Krüger, R. Opfer, A. Schlaefer (2024), "Quantitative evaluation of activation maps for weakly-supervised lung nodule segmentation", In Medical Imaging 2024: Computer-Aided Diagnosis, April, 2024. Vol. 12927, 129272P. SPIE. [Abstract] [BibTeX] [DOI] [URL]
  • Abstract: The manual assessment of chest radiographs by radiologists is a time-consuming and error-prone process that relies on the availability of trained professionals. Deep learning methods have the potential to alleviate the workload of radiologists in pathology detection and diagnosis. However, one major drawback of deep learning methods is their lack of explainable decision-making, which is crucial in computer-aided diagnosis. To address this issue, activation maps of the underlying convolutional neural networks (CNN) are frequently used to indicate the regions of focus for the network during predictions. However, often, an evaluation of these activation maps concerning the actual predicted pathology is missing. In this study, we quantitatively evaluate the usage of activation maps for segmenting pulmonary nodules in chest radiographs. We compare transformer-based, CNN-based, and hybrid architectures using different visualization methods. Our results show that although high performance can be achieved in the classification task across all models, the activation masks show little correlation with the actual position of the nodules.
    BibTeX:
    @inproceedings{Behrendt2024.,
      author = {Behrendt, Finn and Sonawane, Suyash and Bhattacharya, Debayan and Maack, Lennart and Krüger, Julia and Opfer, Roland and Schlaefer, Alexander},
      editor = {Weijie Chen and Susan M. Astley},
      title = {Quantitative evaluation of activation maps for weakly-supervised lung nodule segmentation},
      booktitle = {Medical Imaging 2024: Computer-Aided Diagnosis},
      journal = {Proc.SPIE},
      publisher = {SPIE},
      year = {2024},
      volume = {12927},
      pages = {129272P},
      url = {https://doi.org/10.1117/12.3006416},
      doi = {10.1117/12.3006416}
    }
    
  • D. Bhattacharya, B. T. Becker, F. Behrendt, D. Beyersdorff, E. Petersen, M. Petersen, B. Cheng, D. Eggert, C. Betz, A. Schlaefer, A. S. Hoffmann (2024), "Computer-Aided Diagnosis of Maxillary Sinus Anomalies: Validation and Clinical Correlation", The Laryngoscope. Vol. 134(9),3927-3934. [Abstract] [BibTeX] [DOI] [URL]
  • Abstract: Objective Computer aided diagnostics (CAD) systems can automate the differentiation of maxillary sinus (MS) with and without opacification, simplifying the typically laborious process and aiding in clinical insight discovery within large cohorts. Methods This study uses Hamburg City Health Study (HCHS) a large, prospective, long-term, population-based cohort study of participants between 45 and 74 years of age. We develop a CAD system using an ensemble of 3D Convolutional Neural Network (CNN) to analyze cranial MRIs, distinguishing MS with opacifications (polyps, cysts, mucosal thickening) from MS without opacifications. The system is used to find correlations of participants with and without MS opacifications with clinical data (smoking, alcohol, BMI, asthma, bronchitis, sex, age, leukocyte count, C-reactive protein, allergies). Results The evaluation metrics of CAD system (Area Under Receiver Operator Characteristic: 0.95, sensitivity: 0.85, specificity: 0.90) demonstrated the effectiveness of our approach. MS with opacification group exhibited higher alcohol consumption, higher BMI, higher incidence of intrinsic asthma and extrinsic asthma. Male sex had higher prevalence of MS opacifications. Participants with MS opacifications had higher incidence of hay fever and house dust allergy but lower incidence of bee/wasp venom allergy. Conclusion The study demonstrates a 3D CNN's ability to distinguish MS with and without opacifications, improving automated diagnosis and aiding in correlating clinical data in population studies. Level of Evidence 3 Laryngoscope, 134:3927–3934, 2024
    BibTeX:
    @article{https://doi.org/10.1002/lary.31413,
      author = {Bhattacharya, Debayan and Becker, Benjamin Tobias and Behrendt, Finn and Beyersdorff, Dirk and Petersen, Elina and Petersen, Marvin and Cheng, Bastian and Eggert, Dennis and Betz, Christian and Schlaefer, Alexander and Hoffmann, Anna Sophie},
      title = {Computer-Aided Diagnosis of Maxillary Sinus Anomalies: Validation and Clinical Correlation},
      journal = {The Laryngoscope},
      year = {2024},
      volume = {134},
      number = {9},
      pages = {3927-3934},
      url = {https://onlinelibrary.wiley.com/doi/abs/10.1002/lary.31413},
      doi = {10.1002/lary.31413}
    }
    
  • D. Bhattacharya, F. Behrendt, B. T. Becker, L. Maack, D. Beyersdorff, E. Petersen, M. Petersen, B. Cheng, D. Eggert, C. Betz, A. S. Hoffmann, A. Schlaefer (2024), "Self-supervised learning for classifying paranasal anomalies in the maxillary sinus", International Journal of Computer Assisted Radiology and Surgery. Vol. 19(9),1713-1721. [Abstract] [BibTeX] [DOI] [URL]
  • Abstract: Paranasal anomalies, frequently identified in routine radiological screenings, exhibit diverse morphological characteristics. Due to the diversity of anomalies, supervised learning methods require large labelled dataset exhibiting diverse anomaly morphology. Self-supervised learning (SSL) can be used to learn representations from unlabelled data. However, there are no SSL methods designed for the downstream task of classifying paranasal anomalies in the maxillary sinus (MS).
    BibTeX:
    @article{Bhattacharya2024a,
      author = {Bhattacharya, Debayan and Behrendt, Finn and Becker, Benjamin Tobias and Maack, Lennart and Beyersdorff, Dirk and Petersen, Elina and Petersen, Marvin and Cheng, Bastian and Eggert, Dennis and Betz, Christian and Hoffmann, Anna Sophie and Schlaefer, Alexander},
      title = {Self-supervised learning for classifying paranasal anomalies in the maxillary sinus},
      journal = {International Journal of Computer Assisted Radiology and Surgery},
      year = {2024},
      volume = {19},
      number = {9},
      pages = {1713--1721},
      url = {https://doi.org/10.1007/s11548-024-03172-5},
      doi = {10.1007/s11548-024-03172-5}
    }
    
  • D. Bhattacharya, F. Behrendt, L. Maack, B. T. Becker, D. Beyersdorff, E. Petersen, M. Petersen, B. Cheng, D. Eggert, C. Betz, A. S. Hoffmann, A. Schlaefer (2024), "Convolutional transformer network for paranasal anomaly classification in the maxillary sinus", In Medical Imaging 2024: Computer-Aided Diagnosis. Vol. 12927,1292717. SPIE. [Abstract] [BibTeX] [DOI] [URL]
  • Abstract: Large-scale population studies have examined the detection of sinus opacities in cranial MRIs. Deep learning methods, specifically 3D convolutional neural networks (CNNs), have been used to classify these anomalies. However, CNNs have limitations in capturing long-range dependencies across the low and high level features, potentially reducing performance. To address this, we propose an end-to-end pipeline using a novel deep learning network called ConTra-Net. ConTra-Net combines the strengths of CNNs and self-attention mechanisms of transformers to classify paranasal anomalies in the maxillary sinuses. Our approach outperforms 3D CNNs and 3D Vision Transformer (ViT), with relative improvements in F1 score of 11.68% and 53.5%, respectively. Our pipeline with ConTra-Net could serve as an alternative to reduce misdiagnosis rates in classifying paranasal anomalies.
    BibTeX:
    @inproceedings{10.1117/12.3005515,
      author = {Debayan Bhattacharya and Finn Behrendt and Lennart Maack and Benjamin Tobias Becker and Dirk Beyersdorff and Elina Petersen and Marvin Petersen and Bastian Cheng and Dennis Eggert and Christian Betz and Anna Sophie Hoffmann and Alexander Schlaefer},
      editor = {Weijie Chen and Susan M. Astley},
      title = {Convolutional transformer network for paranasal anomaly classification in the maxillary sinus},
      booktitle = {Medical Imaging 2024: Computer-Aided Diagnosis},
      publisher = {SPIE},
      year = {2024},
      volume = {12927},
      pages = {1292717},
      url = {https://doi.org/10.1117/12.3005515},
      doi = {10.1117/12.3005515}
    }
    
  • D. Bhattacharya, K. Reuter, F. Behrendt, L. Maack, S. Grube, A. Schlaefer (2024), "PolypNextLSTM: a lightweight and fast polyp video segmentation network using ConvNext and ConvLSTM", International Journal of Computer Assisted Radiology and Surgery. Vol. 19(10),2111-2119. [Abstract] [BibTeX] [DOI] [URL]
  • Abstract: Commonly employed in polyp segmentation, single-image UNet architectures lack the temporal insight clinicians gain from video data in diagnosing polyps. To mirror clinical practices more faithfully, our proposed solution, PolypNextLSTM, leverages video-based deep learning, harnessing temporal information for superior segmentation performance with least parameter overhead, making it possibly suitable for edge devices.
    BibTeX:
    @article{Bhattacharya2024,
      author = {Bhattacharya, Debayan and Reuter, Konrad and Behrendt, Finn and Maack, Lennart and Grube, Sarah and Schlaefer, Alexander},
      title = {PolypNextLSTM: a lightweight and fast polyp video segmentation network using ConvNext and ConvLSTM},
      journal = {International Journal of Computer Assisted Radiology and Surgery},
      year = {2024},
      volume = {19},
      number = {10},
      pages = {2111--2119},
      url = {https://doi.org/10.1007/s11548-024-03244-6},
      doi = {10.1007/s11548-024-03244-6}
    }
    
  • A. J. Boudreault, J. Spille, J. Wiltfang, A. Schlaefer, M. Neidhardt (2024), "6-Degree Vision based Tracking of a Mandible Phantom with Deep Learning", Current Directions in Biomedical Engineering. Vol. 10(1),5-8. [BibTeX] [DOI] [URL]
  • BibTeX:
    @article{BoudreaultSpilleWiltfangSchlaeferNeidhardt+2024+5+8,
      author = {A. J. Boudreault and J. Spille and J. Wiltfang and A. Schlaefer and M. Neidhardt},
      title = {6-Degree Vision based Tracking of a Mandible Phantom with Deep Learning},
      journal = {Current Directions in Biomedical Engineering},
      year = {2024},
      volume = {10},
      number = {1},
      pages = {5--8},
      url = {https://doi.org/10.1515/cdbme-2024-0102},
      doi = {10.1515/cdbme-2024-0102}
    }
    
  • S. Gerlach, F.-A. Siebert, A. Schlaefer (2024), "Robust stochastic optimization of needle configurations for robotic HDR prostate brachytherapy", Medical Physics. Vol. 51(1),464-475. [Abstract] [BibTeX] [DOI] [URL]
  • Abstract: Background Ideally, inverse planning for HDR brachytherapy (BT) should include the pose of the needles which define the trajectory of the source. This would be particularly interesting when considering the additional freedom and accuracy in needle pose which robotic needle placement enables. However, needle insertion typically leads to tissue deformation, resulting in uncertainty regarding the actual pose of the needles with respect to the tissue. Purpose To efficiently address uncertainty during inverse planning for HDR BT in order to robustly optimize the pose of the needles before insertion, that is, to facilitate path planning for robotic needle placement. Methods We use a form of stochastic linear programming to model the inverse treatment planning problem. To account for uncertainty, we consider random tissue displacements at the needle tip to simulate tissue deformation. Conventionally for stochastic linear programming, each simulated deformation is reflected by an addition to the linear programming problem which increases problem size and computational complexity substantially and leads to impractical runtime. We propose two efficient approaches for stochastic linear programming. First, we consider averaging dose coefficients to reduce the problem size. Second, we study weighting of the slack variables of an adjusted linear problem to approximate the full stochastic linear program. We compare different approaches to optimize the needle configurations and evaluate their robustness with respect to different amounts of tissue deformation. Results Our results illustrate that stochastic planning can improve the robustness of the treatment with respect to deformation. The proposed approaches approximating stochastic linear programming better conform to the tissue deformation compared to conventional linear programming. They show good correlation with the plans computed after deformation while reducing the runtime by two orders of magnitude compared to the complete stochastic linear program. Robust optimization of needle configurations takes on average 59.42 s. Skew needle configurations lead to mean coverage improvements compared to parallel needles from 0.39 to 2.94 percentage points, when 8 mm tissue deformation is considered. Considering tissue deformations from 4 to 10 mm during planning with weighted stochastic optimization and skew needles generally results in improved mean coverage from 1.77 to 4.21 percentage points. Conclusions We show that efficient stochastic optimization allows selecting needle configurations which are more robust with respect to potentially negative effects of target deformation and displacement on the achievable prescription dose coverage. The approach facilitates robust path planning for robotic needle placement.
    BibTeX:
    @article{Gerlach2024,
      author = {Gerlach, Stefan and Siebert, Frank-André and Schlaefer, Alexander},
      title = {Robust stochastic optimization of needle configurations for robotic HDR prostate brachytherapy},
      journal = {Medical Physics},
      year = {2024},
      volume = {51},
      number = {1},
      pages = {464-475},
      url = {https://aapm.onlinelibrary.wiley.com/doi/abs/10.1002/mp.16804},
      doi = {10.1002/mp.16804}
    }
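
    To illustrate the slack-weighting idea mentioned in the abstract above: a minimal, hypothetical dose LP in which each target voxel gets a non-negative slack variable for underdosage and the objective minimizes the weighted slack sum. The dose coefficients, prescription values, and bounds below are random placeholders, not the paper's planning system.

      # Minimal sketch: tiny inverse-planning LP with per-voxel underdosage slacks.
      import numpy as np
      from scipy.optimize import linprog

      rng = np.random.default_rng(0)
      n_voxels, n_dwell = 50, 8
      D = rng.uniform(0.0, 1.0, size=(n_voxels, n_dwell))  # dose per unit dwell time
      d_presc = np.full(n_voxels, 10.0)                     # prescription dose per voxel
      slack_weights = np.ones(n_voxels)                     # per-voxel slack weights

      # Variables x = [dwell times t (n_dwell), slacks s (n_voxels)], all >= 0.
      # Coverage constraint D @ t + s >= d_presc  <=>  -D @ t - s <= -d_presc.
      c = np.concatenate([np.zeros(n_dwell), slack_weights])
      A_ub = np.hstack([-D, -np.eye(n_voxels)])
      b_ub = -d_presc
      bounds = [(0.0, 2.0)] * n_dwell + [(0.0, None)] * n_voxels  # cap dwell times
      res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
      dwell_times, slacks = res.x[:n_dwell], res.x[n_dwell:]
      print("weighted underdosage:", res.fun)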
    
  • S. Grube, S. Latus, F. Behrendt, O. Riabova, M. Neidhardt, A. Schlaefer (2024), "Needle tracking in low-resolution ultrasound volumes using deep learning", International Journal of Computer Assisted Radiology and Surgery. [Abstract] [BibTeX] [DOI] [URL]
  • Abstract: Clinical needle insertion into tissue, commonly assisted by 2D ultrasound imaging for real-time navigation, faces the challenge of precise needle and probe alignment to reduce out-of-plane movement. Recent studies investigate 3D ultrasound imaging together with deep learning to overcome this problem, focusing on acquiring high-resolution images to create optimal conditions for needle tip detection. However, high-resolution also requires a lot of time for image acquisition and processing, which limits the real-time capability. Therefore, we aim to maximize the US volume rate with the trade-off of low image resolution. We propose a deep learning approach to directly extract the 3D needle tip position from sparsely sampled US volumes.
    BibTeX:
    @article{Grube2024,
      author = {Grube, Sarah and Latus, Sarah and Behrendt, Finn and Riabova, Oleksandra and Neidhardt, Maximilian and Schlaefer, Alexander},
      title = {Needle tracking in low-resolution ultrasound volumes using deep learning},
      journal = {International Journal of Computer Assisted Radiology and Surgery},
      year = {2024},
      url = {https://doi.org/10.1007/s11548-024-03234-8},
      doi = {10.1007/s11548-024-03234-8}
    }
    
  • S. Grube, M. Neidhardt, A.-K. Hermann, J. Sprenger, K. Abdolazizi, S. Latus, C. J. Cyron, A. Schlaefer (2024), "A Calibration Approach for Elasticity Estimation with Medical Tools", Current Directions in Biomedical Engineering. Vol. 10(2),99-102. [Abstract] [BibTeX] [DOI] [URL]
  • Abstract: Soft tissue elasticity is directly related to different stages of diseases and can be used for tissue identification during minimally invasive procedures. By palpating a tissue with a robot in a minimally invasive fashion force-displacement curves can be acquired. However, force-displacement curves strongly depend on the tool geometry which is often complex in the case of medical tools. Hence, a tool calibration procedure is desired to directly map force-displacement curves to the corresponding tissue elasticity. We present an experimental setup for calibrating medical tools with a robot. First, we propose to estimate the elasticity of gelatin phantoms by spherical indentation with a state-of-the-art contact model. We estimate force-displacement curves for different gelatin elasticities and temperatures. Our experiments demonstrate that gelatin elasticity is highly dependent on temperature, which can lead to an elasticity offset if not considered. Second, we propose to use a more complex material model, e.g., a neural network, that can be trained with the determined elasticities. Considering the temperature of the gelatin sample we can represent different elasticities per phantom and thereby increase our training data. We report elasticity values ranging from 10 to 40 kPa for a 10 % gelatin phantom, depending on temperature.
    BibTeX:
    @article{GrubeNeidhardtHermannSprengerAbdolaziziLatusCyronSchlaefer+2024+99+102,
      author = {S. Grube and M. Neidhardt and A.-K. Hermann and J. Sprenger and K. Abdolazizi and S. Latus and C. J. Cyron and A. Schlaefer},
      title = {A Calibration Approach for Elasticity Estimation with Medical Tools},
      journal = {Current Directions in Biomedical Engineering},
      year = {2024},
      volume = {10},
      number = {2},
      pages = {99-102},
      url = {https://doi.org/10.1515/cdbme-2024-1077},
      doi = {10.1515/cdbme-2024-1077}
    }
    
  • S. Häussler, A. Schlaefer, C. Betz, M. Della Seta, D. Battacharya (2024), "Automatic segmentation and detection of Vestibular Schwannoma in MRI by Deep Learning", Laryngo-Rhino-Otologie. Vol. 103(S 02) Georg Thieme Verlag KG. [Abstract] [BibTeX] [DOI] [URL]
  • Abstract: Automatic segmentation and detection of pathologies in MRI by deep learning is an upcoming topic. We introduce a novel model combining two Convolutional Neural Network (CNN) models for the detection of VS by deep learning.
    BibTeX:
    @article{Haeussler2024,
      author = {Häussler, Sophia and Schlaefer, Alexander and Betz, Christian and Della Seta, Marta and Battacharya, Debayan},
      title = {Automatic segmentation and detection of Vestibular Schwannoma in MRI by Deep Learning},
      journal = {Laryngo-Rhino-Otologie},
      publisher = {Georg Thieme Verlag KG},
      year = {2024},
      volume = {103},
      number = {S 02},
      url = {http://www.thieme-connect.de/products/ejournals/abstract/10.1055/s-0044-1784536},
      doi = {10.1055/s-0044-1784536}
    }
    
  • S. Häussler, A. Schlaefer, C. Betz, M. Della Seta, D. Battacharya (2024), "Automatische Segmentierung und Detektion von Vestibularisschwannomen im MRT durch Deep Learning", Laryngo-Rhino-Otologie. Vol. 103(S 02) Georg Thieme Verlag KG. [Abstract] [BibTeX] [DOI] [URL]
  • Abstract: Automatic segmentation and detection of pathologies in MRI by deep learning is gaining more and more attention. The aim of this study is to introduce a new combined Convolutional Neural Network (CNN) model for the automatic detection of vestibular schwannomas.
    BibTeX:
    @article{Haeussler2024a,
      author = {Häussler, Sophia and Schlaefer, Alexander and Betz, Christian and Della Seta, Marta and Battacharya, Debayan},
      title = {Automatische Segmentierung und Detektion von Vestibularisschwannomen im MRT durch Deep Learning},
      journal = {Laryngo-Rhino-Otologie},
      publisher = {Georg Thieme Verlag KG},
      year = {2024},
      volume = {103},
      number = {S 02},
      url = {http://www.thieme-connect.de/products/ejournals/abstract/10.1055/s-0044-1783989},
      doi = {10.1055/s-0044-1783989}
    }
    
  • M. F. R. Ibrahim, T. Alkanat, M. Meijer, A. Schlaefer, P. Stelldinger (2024), "Artifact: End-to-End Multi-Modal Tiny-CNN for Cardiovascular Monitoring on Sensor Patches", In 2024 IEEE International Conference on Pervasive Computing and Communications Workshops and other Affiliated Events (PerCom Workshops)., March, 2024. ,5-6. [Abstract] [BibTeX] [DOI]
  • Abstract: This document describes the content and usage of the code artifact files of the original paper “End-to-End Multi-Modal Tiny-CNN for Cardiovascular Monitoring on Sensor Patches”. In that work, we show the feasibility of applying deep learning for the classification of synchronized electrocardiogram and phonocardiogram recordings under very tight resource constraints. Our model employs an early fusion of data and uses convolutional layers to solve the problem of binary classification of anomalies. We use the “training-a” dataset of the Physionet Challenge 2016 database for evaluation. Further, we demonstrate the applicability of our model on edge devices, such as sensor patches, by estimating processor performance, power consumption, and silicon area.
    BibTeX:
    @inproceedings{10502566,
      author = {Ibrahim, Mustafa Fuad Rifet and Alkanat, Tunc and Meijer, Maurice and Schlaefer, Alexander and Stelldinger, Peer},
      title = {Artifact: End-to-End Multi-Modal Tiny-CNN for Cardiovascular Monitoring on Sensor Patches},
      booktitle = {2024 IEEE International Conference on Pervasive Computing and Communications Workshops and other Affiliated Events (PerCom Workshops)},
      year = {2024},
      pages = {5-6},
      doi = {10.1109/PerComWorkshops59983.2024.10502566}
    }
    
  • S. Latus, M. Kulas, J. Sprenger, D. Bhattacharya, P. C. Breda, L. Wittig, T. Eixmann, G. Hüttmann, L. Maack, D. Eggert, C. Betz, A. Schlaefer (2024), "Motion-compensated OCT imaging of laryngeal tissue", In Medical Imaging 2024: Image-Guided Procedures, Robotic Interventions, and Modeling. Vol. 12928, 1292809. SPIE. [Abstract] [BibTeX] [DOI] [URL]
  • Abstract: The increasing incidence of laryngeal carcinomas requires approaches for early diagnosis and treatment. In clinical practice, white light endoscopy of the laryngeal region is typically followed by biopsy under general anesthesia. Thus, image based diagnosis using optical coherence tomography (OCT) has been proposed to study sub-surface tissue layers at high resolution. However, accessing the region of interest requires robust miniature OCT probes that can be forwarded through the working channel of a laryngoscope. Typically, such probes generate A-scans, i.e., single column depth images, which are rather difficult to interpret. We propose a novel approach using the endoscopic camera images to spatially align these A-scans. Given the natural tissue motion and movements of the laryngoscope, the resulting OCT images show a three-dimensional representation of the sub-surface structures, which is simpler to interpret. We present the overall imaging setup and the motion tracking method. Moreover, we describe an experimental setup to assess the precision of the spatial alignment. We study different tracking templates and report root-mean-squared errors of 0.08 mm and 0.18 mm for sinusoidal and freehand motion, respectively. Furthermore, we also demonstrate the in-vivo application of the approach, illustrating the benefit of spatially meaningful alignment of the A-scans to study laryngeal tissue.
    BibTeX:
    @inproceedings{Latus2024.,
      author = {S. Latus and M. Kulas and J. Sprenger and D. Bhattacharya and P. C. Breda and L. Wittig and T. Eixmann and G. Hüttmann and L. Maack and D. Eggert and C. Betz and A. Schlaefer},
      editor = {Jeffrey H. Siewerdsen and Maryam E. Rettmann},
      title = {Motion-compensated OCT imaging of laryngeal tissue},
      booktitle = {Medical Imaging 2024: Image-Guided Procedures, Robotic Interventions, and Modeling},
      year = {2024},
      volume = {12928},
      pages = {1292809},
      url = {https://doi.org/10.1117/12.3006729},
      doi = {10.1117/12.3006729}
    }
    
  • R. Mieling, S. Latus, F. Behrendt, D. Bhattacharya, M. Neidhardt, A. Schlaefer (2024), "Can Complex-Valued Neural Networks Improve Force Sensing with Optical Coherence Tomography?", In 2024 IEEE International Symposium on Biomedical Imaging (ISBI)., May, 2024. ,1-4. [Abstract] [BibTeX] [DOI]
  • Abstract: Deep learning has proven highly effective in processing optical coherence tomography (OCT) images. However, current approaches often treat complex OCT data as real-valued representations and mostly focus on the amplitude component. In contrast, OCT phase can detect submicrometer displacements, surpassing the resolution of amplitude-based methods. In this study, we investigate complex-valued and real-valued neural networks (cvNNs and rvNNs) for end-to-end needle tip force estimation from complex OCT data. We consider predicting absolute forces and force differences over time. We compare cvNNs with two rvNNs where amplitude and phase are either stacked in the channel dimension or processed separately in a dual-path architecture. Our results show that the amplitude-focused estimation of the absolute force is still best achieved with rvNNs. However, when estimating small force differences, where phase-sensitive OCT is particularly valuable, cvNNs outperform rvNNs with stacked inputs. Similar improvements can be achieved with separate processing in the dual-path models. The results emphasize the importance of model design in processing complex OCT signals and demonstrate the potential of cvNNs for phase-sensitive OCT. However, our work also highlights the current limitations of cvNNs, particularly computational cost and data dependency.
    BibTeX:
    @inproceedings{mieling2024can,
      author = {Mieling, R and Latus, S and Behrendt, F and Bhattacharya, D and Neidhardt, M and Schlaefer, A},
      title = {Can Complex-Valued Neural Networks Improve Force Sensing with Optical Coherence Tomography?},
      booktitle = {2024 IEEE International Symposium on Biomedical Imaging (ISBI)},
      year = {2024},
      pages = {1-4},
      doi = {10.1109/ISBI56570.2024.10635602}
    }
    
  • M. Neidhardt, S. Gerlach, F. N. Schmidt, I. A. K. Fiedler, S. Grube, B. Busse, A. Schlaefer (2024), "VR-based body tracking to stimulate musculoskeletal training", Current Directions in Biomedical Engineering. Vol. 10(2),111-114. [Abstract] [BibTeX] [DOI] [URL]
  • Abstract: Training helps to maintain and improve sufficient muscle function, body control, and body coordination. These are important to reduce the risk of fracture incidents caused by falls, especially for the elderly or people recovering from injury. Virtual reality training can offer a cost-effective and individualized training experience. We present an application for the HoloLens 2 to enable musculoskeletal training for elderly and impaired persons to allow for autonomous training and automatic progress evaluation. We designed a virtual downhill skiing scenario that is controlled by body movement to stimulate balance and body control. By adapting the parameters of the ski slope, we can tailor the intensity of the training to individual users. In this work, we evaluate whether the movement data of the HoloLens 2 alone is sufficient to control and predict body movement and joint angles during musculoskeletal training. We record the movements of 10 healthy volunteers with external tracking cameras and track a set of body and joint angles of the participant during training. We estimate correlation coefficients and systematically analyze whether whole body movement can be derived from the movement data of the HoloLens 2. No participant reports movement sickness effects and all were able to quickly interact and control their movement during skiing. Our results show a high correlation between HoloLens 2 movement data and the external tracking of the upper body movement and joint angles of the lower limbs.
    BibTeX:
    @article{NeidhardtGerlachSchmidtFiedlerGrubeBusseSchlaefer+2024+111+114,
      author = {M. Neidhardt and S. Gerlach and F. N. Schmidt and I. A. K. Fiedler and S. Grube and B. Busse and A. Schlaefer},
      title = {VR-based body tracking to stimulate musculoskeletal training},
      journal = {Current Directions in Biomedical Engineering},
      year = {2024},
      volume = {10},
      number = {2},
      pages = {111-114},
      url = {https://doi.org/10.1515/cdbme-2024-1080},
      doi = {10.1515/cdbme-2024-1080}
    }
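
    Relating to the correlation analysis described in the entry above: a minimal sketch of how a Pearson correlation between a headset movement trace and an externally tracked joint angle could be computed. The traces below are synthetic and the names are illustrative only.

      # Minimal sketch: correlation between a simulated HoloLens sway trace
      # and an externally tracked joint angle (synthetic data).
      import numpy as np

      rng = np.random.default_rng(0)
      t = np.linspace(0.0, 10.0, 500)                      # 10 s of samples
      hololens_sway = np.sin(2 * np.pi * 0.5 * t)          # lateral head sway
      knee_angle = 0.8 * hololens_sway + 0.1 * rng.standard_normal(t.size)

      r = np.corrcoef(hololens_sway, knee_angle)[0, 1]
      print(f"Pearson correlation: {r:.2f}")               # close to 1 for correlated traces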
    
  • M. Neidhardt, S. Latus, L. Maack, S. Gerlach, F. von Brackel, B. Busse, A. Schlaefer (2024), "VR-based Body Tracking for Homecare Training", In Proceedings of the Smart Healthy Environments International Conference 2024. [Abstract] [BibTeX]
  • Abstract: Training helps to maintain and improve sufficient muscle function, body control, and body coordination. These are important to reduce the risk of fracture incidents caused by falls, especially for the elderly or people recovering from injury. Virtual reality training can offer a cost-effective and individualized training experience. We present an application for the HoloLens 2 to enable musculoskeletal training for elderly and impaired persons to allow for autonomous training and automatic progress evaluation. We designed a virtual downhill skiing scenario that is controlled by body movement to stimulate balance and body control. By adapting the parameters of the ski slope, we can tailor the intensity of the training to individual users. In this work, we evaluate whether the movement data of the HoloLens 2 alone is sufficient to control and predict body movement and joint angles during musculoskeletal training. We record the movements of 10 healthy volunteers with external tracking cameras and track a set of body and joint angles of the participant during training. We estimate correlation coefficients and systematically analyze whether whole body movement can be derived from the movement data of the HoloLens 2. No participant reports movement sickness effects and all were able to quickly interact and control their movement during skiing. Our results show a high correlation between HoloLens 2 movement data and the external tracking of the upper body movement and joint angles of the lower limbs.
    BibTeX:
    @inproceedings{neidhardt2023vrhtr,
      author = {M. Neidhardt and S. Latus and L. Maack and S. Gerlach and F. von Brackel and B. Busse and A. Schlaefer},
      title = {VR-based Body Tracking for Homecare Training.},
      booktitle = {Proceedings of the Smart Healthy Environments International Conference 2024},
      year = {2024}
    }
    
  • M. Neidhardt, R. Mieling, S. Latus, M. Fischer, T. Maurer, A. Schlaefer (2024), "A Modified da Vinci Surgical Instrument for OCE based Elasticity Estimation with Deep Learning.", In Proceedings of the 10th IEEE RAS/EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob 2024), In Print. [Abstract] [BibTeX] [DOI] [URL]
  • Abstract: Robot-assisted surgery has advantages compared to conventional laparoscopic procedures, e.g., precise movement of the surgical instruments, improved dexterity, and high-resolution visualization of the surgical field. However, mechanical tissue properties may provide additional information, e.g., on the location of lesions or vessels. While elastographic imaging has been proposed, it is not readily available as an online modality during robot-assisted surgery. We propose modifying a da Vinci surgical instrument to realize optical coherence elastography (OCE) for quantitative elasticity estimation. The modified da Vinci instrument is equipped with piezoelectric elements for shear wave excitation and we employ fast optical coherence tomography (OCT) imaging to track propagating wave fields, which are directly related to biomechanical tissue properties. All high-voltage components are mounted at the proximal end outside the patient. We demonstrate that external excitation at the instrument shaft can effectively stimulate shear waves, even when considering damping. Comparing conventional and deep learning-based signal processing, resulting in mean absolute errors of 19.27 kPa and 6.29 kPa, respectively. These results illustrate that precise quantitative elasticity estimates can be obtained. We also demonstrate quantitative elasticity estimation on ex-vivo tissue samples of heart, liver and stomach, and show that the measurements can be used to distinguish soft and stiff tissue types.
    BibTeX:
    @inproceedings{neidhardt20240507,
      author = {M. Neidhardt and R. Mieling and S. Latus and M. Fischer and T. Maurer and A. Schlaefer},
      title = {A Modified da Vinci Surgical Instrument for OCE based Elasticity Estimation with Deep Learning.},
      booktitle = {Proceedings of the 10th IEEE RAS/EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob 2024)},
      year = {2024},
      pages = {In Print},
      url = {https://doi.org/10.48550/arXiv.2403.09256},
      doi = {10.48550/arXiv.2403.09256}
    }
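
    As background on the relation the abstract above alludes to between tracked shear-wave propagation and tissue elasticity (a standard elastography assumption, not taken from the paper's pipeline): for a nearly incompressible, homogeneous, linear elastic medium, Young's modulus can be estimated from the shear-wave speed as E ≈ 3ρc². A minimal sketch with illustrative values:

      # Minimal sketch: estimate Young's modulus from a tracked shear-wave speed,
      # assuming a nearly incompressible, homogeneous, linear elastic medium
      # (E ~ 3 * rho * c^2).
      def youngs_modulus_from_shear_wave(speed_m_per_s, density_kg_per_m3=1000.0):
          """Return Young's modulus in kPa for a given shear-wave speed in m/s."""
          shear_modulus = density_kg_per_m3 * speed_m_per_s ** 2  # G = rho * c^2 (Pa)
          return 3.0 * shear_modulus / 1000.0                     # E ~ 3G, in kPa

      if __name__ == "__main__":
          # Example: a wave speed of 2.5 m/s corresponds to roughly 18.75 kPa.
          print(youngs_modulus_from_shear_wave(2.5))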
    
  • R. Opfer, J. Krüger, T. Buddenkotte, L. Spies, F. Behrendt, S. Schippling, R. Buchert (2024), "BrainLossNet: a fast, accurate and robust method to estimate brain volume loss from longitudinal MRI", International Journal of Computer Assisted Radiology and Surgery. Vol. 19(9),1763-1771. [Abstract] [BibTeX] [DOI] [URL]
  • Abstract: MRI-derived brain volume loss (BVL) is widely used as neurodegeneration marker. SIENA is state-of-the-art for BVL measurement, but limited by long computation time. Here we propose “BrainLossNet”, a convolutional neural network (CNN)-based method for BVL-estimation.
    BibTeX:
    @article{Opfer2024,
      author = {Opfer, Roland and Krüger, Julia and Buddenkotte, Thomas and Spies, Lothar and Behrendt, Finn and Schippling, Sven and Buchert, Ralph},
      title = {BrainLossNet: a fast, accurate and robust method to estimate brain volume loss from longitudinal MRI},
      journal = {International Journal of Computer Assisted Radiology and Surgery},
      year = {2024},
      volume = {19},
      number = {9},
      pages = {1763-1771},
      url = {https://doi.org/10.1007/s11548-024-03201-3},
      doi = {10.1007/s11548-024-03201-3}
    }
    
  • M. F. Rifet Ibrahim, T. Alkanat, M. Meijer, A. Schlaefer, P. Stelldinger (2024), "End-to-End Multi-Modal Tiny-CNN for Cardiovascular Monitoring on Sensor Patches", In 2024 IEEE International Conference on Pervasive Computing and Communications (PerCom)., March, 2024. ,18-24. [Abstract] [BibTeX] [DOI]
  • Abstract: The vast majority of cardiovascular diseases are avoidable or treatable by preventive measures and early detection. To efficiently detect early signs and risk factors, cardiovascular parameters can be monitored continuously with small sensor patches, which improve the comfort of patients. However, processing the sensor data is a challenging task with the demanding needs of robustness, reliability, performance and efficiency. The field of deep learning has tremendous potential to provide a way to analyze cardiovascular sensor data to detect anomalies which alleviates the workload of doctors for more effective data interpretation. In this work, we show the feasibility of applying deep learning for the classification of synchronized electrocardiogram and phonocardiogram recordings under very tight resource constraints. Our model employs an early fusion of data and uses convolutional layers to solve the problem of binary classification of anomalies. Our experiments show that our model matches the accuracy of the current state-of-the-art model on the “training-a” dataset of the Physionet Challenge 2016 database while being more than two orders of magnitude more efficient in memory footprint and compute cost. Further, we demonstrate the applicability of our model on edge devices, such as sensor patches, by estimating processor performance, power consumption, and silicon area.
    BibTeX:
    @inproceedings{10494450,
      author = {Rifet Ibrahim, Mustafa Fuad and Alkanat, Tunc and Meijer, Maurice and Schlaefer, Alexander and Stelldinger, Peer},
      title = {End-to-End Multi-Modal Tiny-CNN for Cardiovascular Monitoring on Sensor Patches},
      booktitle = {2024 IEEE International Conference on Pervasive Computing and Communications (PerCom)},
      year = {2024},
      pages = {18-24},
      doi = {10.1109/PerCom59722.2024.10494450}
    }
    
  • L. Schild, J. Zang, T. Flügel, D. Weiss, A. Schlaefer, S. Latus (2024), "Automated Detection of Palatal Deformations Using Deep Learning on Endoscopic Images", Current Directions in Biomedical Engineering. Vol. 10(1),65-68. [BibTeX] [DOI] [URL]
  • BibTeX:
    @article{SchildZangFlügelWeissSchlaeferLatus+2024+65+68,
      author = {Leona Schild and Jana Zang and Till Flügel and Deike Weiss and Alexander Schlaefer and Sarah Latus},
      title = {Automated Detection of Palatal Deformations Using Deep Learning on Endoscopic Images},
      journal = {Current Directions in Biomedical Engineering},
      year = {2024},
      volume = {10},
      number = {1},
      pages = {65--68},
      url = {https://doi.org/10.1515/cdbme-2024-0117},
      doi = {10.1515/cdbme-2024-0117}
    }
    
  • E. Sogancioglu, B. v. Ginneken, F. Behrendt, M. Bengs, A. Schlaefer, M. Radu, D. Xu, K. Sheng, F. Scalzo, E. Marcus, S. Papa, J. Teuwen, E. T. Scholten, S. Schalekamp, N. Hendrix, C. Jacobs, W. Hendrix, C. I. Sánchez, K. Murphy (2024), "Nodule Detection and Generation on Chest X-Rays: NODE21 Challenge", IEEE Transactions on Medical Imaging., Aug, 2024. Vol. 43(8),2839-2853. [Abstract] [BibTeX] [DOI]
  • Abstract: Pulmonary nodules may be an early manifestation of lung cancer, the leading cause of cancer-related deaths among both men and women. Numerous studies have established that deep learning methods can yield high-performance levels in the detection of lung nodules in chest X-rays. However, the lack of gold-standard public datasets slows down the progression of the research and prevents benchmarking of methods for this task. To address this, we organized a public research challenge, NODE21, aimed at the detection and generation of lung nodules in chest X-rays. While the detection track assesses state-of-the-art nodule detection systems, the generation track determines the utility of nodule generation algorithms to augment training data and hence improve the performance of the detection systems. This paper summarizes the results of the NODE21 challenge and performs extensive additional experiments to examine the impact of the synthetically generated nodule training images on the detection algorithm performance.
    BibTeX:
    @article{10479589,
      author = {Sogancioglu, E. and v. Ginneken, B. and Behrendt, F. and Bengs, M. and Schlaefer, A. and Radu, M. and Xu, D. and Sheng, K. and Scalzo, F. and Marcus, E. and Papa, S. and Teuwen, J. and Scholten, E. T. and Schalekamp, S. and Hendrix, N. and Jacobs, C. and Hendrix, W. and Sánchez, C. I. and Murphy, K.},
      title = {Nodule Detection and Generation on Chest X-Rays: NODE21 Challenge},
      journal = {IEEE Transactions on Medical Imaging},
      year = {2024},
      volume = {43},
      number = {8},
      pages = {2839-2853},
      doi = {10.1109/TMI.2024.3382042}
    }
    

2023

    • F. Behrendt, M. Bengs, D. Bhattacharya, J. Krüger, R. Opfer, A. Schlaefer (2023), "A systematic approach to deep learning-based nodule detection in chest radiographs", Scientific Reports. Vol. 13(1),10120. [Abstract] [BibTeX] [DOI] [URL]
    • Abstract: Lung cancer is a serious disease responsible for millions of deaths every year. Early stages of lung cancer can be manifested in pulmonary lung nodules. To assist radiologists in reducing the number of overseen nodules and to increase the detection accuracy in general, automatic detection algorithms have been proposed. Particularly, deep learning methods are promising. However, obtaining clinically relevant results remains challenging. While a variety of approaches have been proposed for general purpose object detection, these are typically evaluated on benchmark data sets. Achieving competitive performance for specific real-world problems like lung nodule detection typically requires careful analysis of the problem at hand and the selection and tuning of suitable deep learning models. We present a systematic comparison of state-of-the-art object detection algorithms for the task of lung nodule detection. In this regard, we address the critical aspect of class imbalance and demonstrate a data augmentation approach as well as transfer learning to boost performance. We illustrate how this analysis and a combination of multiple architectures results in state-of-the-art performance for lung nodule detection, which is demonstrated by the proposed model winning the detection track of the Node21 competition. The code for our approach is available at https://github.com/FinnBehrendt/node21-submit.
      BibTeX:
      @article{Behrendt2023,
        author = {Behrendt, Finn and Bengs, Marcel and Bhattacharya, Debayan and Krüger, Julia and Opfer, Roland and Schlaefer, Alexander},
        title = {A systematic approach to deep learning-based nodule detection in chest radiographs},
        journal = {Scientific Reports},
        year = {2023},
        volume = {13},
        number = {1},
        pages = {10120},
        url = {https://doi.org/10.1038/s41598-023-37270-2},
        doi = {10.1038/s41598-023-37270-2}
      }
      
    • F. Behrendt, D. Bhattacharya, J. Krüger, R. Opfer, A. Schlaefer (2023), "Nodule Detection in Chest Radiographs with Unsupervised Pre-Trained Detection Transformers", In 2023 IEEE 20th International Symposium on Biomedical Imaging (ISBI), April, 2023, 1-4. [Abstract] [BibTeX] [DOI]
    • Abstract: The detection of pulmonary nodules in chest x-rays is important for early observation and monitoring of lung cancer which is a major reason for death worldwide. However, detecting nodules from x-rays is challenging as nodules are easily overseen by radiologists. Convolutional neural networks (CNN) show promising results in supporting the clinical practice by automatic lung nodule detection and localization. Recently, attention-based Vision Transformers have been successfully applied in computer vision tasks. For object detection, end-to-end solutions have been proposed that reduce the amount of encoded prior knowledge and manual postprocessing. This is desirable particularly for medical applications where data domains often vary.In this work, we evaluate the application of Detection Transformers for nodule detection in chest x-rays and compare them against four CNN-based baseline object detection algorithms. To overcome the data inefficiency of Vision Transformers, we investigate the use of self-supervision from large-scale data sources. Our results demonstrate the high performance of transformer-based object detectors, by consistently outperforming CNN-based baselines on the Node21 data set. Furthermore, we demonstrate that self-supervision improves the detection performance without the costly requirement of collecting annotated data.
      BibTeX:
      @inproceedings{Behrendt2023b,
        author = {F. Behrendt and D. Bhattacharya and J. Krüger and R. Opfer and A. Schlaefer},
        title = {Nodule Detection in Chest Radiographs with Unsupervised Pre-Trained Detection Transformers.},
        booktitle = {2023 IEEE 20th International Symposium on Biomedical Imaging (ISBI)},
        year = {2023},
        pages = {1-4},
        doi = {10.1109/ISBI53787.2023.10230753}
      }
      
    • M. Bengs (2023), "Spatio-temporal deep learning for medical image sequences" [Abstract] [BibTeX] [DOI] [URL]
    • Abstract: In this thesis, we investigate and present spatio-temporal deep learning methods for the analysis of medical image sequences. We focus on two application scenarios, motion analysis and dynamic elastography, using optical coherence tomography and ultrasound as imaging modalities. Our results show that deep learning can be used effectively for the end-to-end processing of sequences of medical image data, including sequences of volumetric images.
      BibTeX:
      @phdthesis{Bengs2023a,
        author = {Bengs, Marcel},
        title = {Spatio-temporal deep learning for medical image sequences},
        year = {2023},
        url = {https://hdl.handle.net/11420/44429},
        doi = {10.15480/882.8891}
      }
      
    • M. Bengs, J. Sprenger, S. Gerlach, M. Neidhardt, A. Schlaefer (2023), "Real-Time Motion Analysis With 4D Deep Learning for Ultrasound-Guided Radiotherapy", IEEE Transactions on Biomedical Engineering. ,1-10. [Abstract] [BibTeX] [DOI]
    • Abstract: Motion compensation in radiation therapy is a challenging scenario that requires estimating and forecasting motion of tissue structures to deliver the target dose. Ultrasound offers direct imaging of tissue in real-time and is considered for image guidance in radiation therapy. Recently, fast volumetric ultrasound has gained traction, but motion analysis with such high-dimensional data remains difficult. While deep learning could bring many advantages, such as fast data processing and high performance, it remains unclear how to process sequences of hundreds of image volumes efficiently and effectively. We present a 4D deep learning approach for real-time motion estimation and forecasting using long-term 4D ultrasound data. Using motion traces acquired during radiation therapy combined with various tissue types, our results demonstrate that long-term motion estimation can be performed markerless with a tracking error of 0.35± 0.2 mm and with an inference time of less than 5 ms. Also, we demonstrate forecasting directly from the image data up to 900 ms into the future. Overall, our findings highlight that 4D deep learning is a promising approach for motion analysis during radiotherapy.
      BibTeX:
      @article{Bengs2023,
        author = {Bengs, Marcel and Sprenger, Johanna and Gerlach, Stefan and Neidhardt, Maximilian and Schlaefer, Alexander},
        title = {Real-Time Motion Analysis With 4D Deep Learning for Ultrasound-Guided Radiotherapy},
        journal = {IEEE Transactions on Biomedical Engineering},
        year = {2023},
        pages = {1-10},
        doi = {10.1109/TBME.2023.3262422}
      }
      
    • D. Bhattacharya, F. Behrendt, B. T. Becker, D. Beyersdorff, E. Petersen, M. Petersen, B. Cheng, D. Eggert, C. Betz, A. S. Hoffmann, A. Schlaefer (2023), "Multiple instance ensembling for paranasal anomaly classification in the maxillary sinus", International Journal of Computer Assisted Radiology and Surgery. [Abstract] [BibTeX] [DOI] [URL]
    • Abstract: Paranasal anomalies are commonly discovered during routine radiological screenings and can present with a wide range of morphological features. This diversity can make it difficult for convolutional neural networks (CNNs) to accurately classify these anomalies, especially when working with limited datasets. Additionally, current approaches to paranasal anomaly classification are constrained to identifying a single anomaly at a time. These challenges underscore the need for further research and development in this area.
      BibTeX:
      @article{Bhattacharya2023,
        author = {Bhattacharya, Debayan and Behrendt, Finn and Becker, Benjamin Tobias and Beyersdorff, Dirk and Petersen, Elina and Petersen, Marvin and Cheng, Bastian and Eggert, Dennis and Betz, Christian and Hoffmann, Anna Sophie and Schlaefer, Alexander},
        title = {Multiple instance ensembling for paranasal anomaly classification in the maxillary sinus},
        journal = {International Journal of Computer Assisted Radiology and Surgery},
        year = {2023},
        url = {https://doi.org/10.1007/s11548-023-02990-3},
        doi = {10.1007/s11548-023-02990-3}
      }
      
    • D. Bhattacharya, F. Behrendt, B. T. Becker, D. Beyersdorff, E. Petersen, M. Petersen, B. Cheng, D. Eggert, C. Betz, A. S. Hoffmann, A. Schlaefer (2023), "Unsupervised anomaly detection of paranasal anomalies in the maxillary sinus", In Medical Imaging 2023: Computer-Aided Diagnosis. Vol. 12465,124651B. SPIE. [Abstract] [BibTeX] [DOI] [URL]
    • Abstract: Deep learning (DL) algorithms can be used to automate paranasal anomaly detection from Magnetic Resonance Imaging (MRI). However, previous works relied on supervised learning techniques to distinguish between normal and abnormal samples. This method limits the type of anomalies that can be classified as the anomalies need to be present in the training data. Further, many data points from normal and anomaly class are needed for the model to achieve satisfactory classification performance. However, experienced clinicians can distinguish between normal samples (healthy maxillary sinus) and anomalous samples (anomalous maxillary sinus) after looking at a few normal samples. We mimic the clinicians' ability by learning the distribution of healthy maxillary sinuses using a 3D convolutional auto-encoder (cAE) and its variant, a 3D variational autoencoder (VAE) architecture, and evaluate cAE and VAE for this task. Concretely, we pose the paranasal anomaly detection as an unsupervised anomaly detection problem. Thereby, we are able to reduce the labelling effort of the clinicians as we only use healthy samples during training. Additionally, we can classify any type of anomaly that differs from the training distribution. We train our 3D cAE and VAE to learn a latent representation of healthy maxillary sinus volumes using L1 reconstruction loss. During inference, we use the reconstruction error to classify between normal and anomalous maxillary sinuses. We extract sub-volumes from larger head and neck MRIs and analyse the effect of different fields of view on the detection performance. Finally, we report which anomalies are easiest and hardest to classify using our approach. Our results demonstrate the feasibility of unsupervised detection of paranasal anomalies from MRIs with an AUPRC of 85% and 80% for cAE and VAE, respectively.
      BibTeX:
      @inproceedings{Bhattacharya2023b,
        author = {Debayan Bhattacharya and Finn Behrendt and Benjamin Tobias Becker and Dirk Beyersdorff and Elina Petersen and Marvin Petersen and Bastian Cheng and Dennis Eggert and Christian Betz and Anna Sophie Hoffmann and Alexander Schlaefer},
        editor = {Khan M. Iftekharuddin and Weijie Chen},
        title = {Unsupervised anomaly detection of paranasal anomalies in the maxillary sinus},
        booktitle = {Medical Imaging 2023: Computer-Aided Diagnosis},
        publisher = {SPIE},
        year = {2023},
        volume = {12465},
        pages = {124651B},
        url = {https://doi.org/10.1117/12.2651525},
        doi = {10.1117/12.2651525}
      }
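
      Example (Python): a minimal sketch of the reconstruction-error idea described in the entry above, assuming a small 3D convolutional autoencoder trained with an L1 loss on healthy volumes only; the mean reconstruction error then serves as the anomaly score at inference. Layer sizes, input shape and the threshold are illustrative, not the authors' configuration.

        import torch
        import torch.nn as nn

        class ConvAE3D(nn.Module):
            """Small 3D convolutional autoencoder for healthy sinus volumes."""
            def __init__(self):
                super().__init__()
                self.encoder = nn.Sequential(
                    nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                )
                self.decoder = nn.Sequential(
                    nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
                    nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1),
                )
            def forward(self, x):
                return self.decoder(self.encoder(x))

        model = ConvAE3D()
        l1 = nn.L1Loss(reduction="none")

        healthy = torch.rand(2, 1, 32, 32, 32)        # (B, 1, D, H, W) healthy training volumes
        loss = l1(model(healthy), healthy).mean()     # train on healthy anatomy only
        loss.backward()

        with torch.no_grad():                          # inference on an unseen volume
            test_volume = torch.rand(1, 1, 32, 32, 32)
            score = l1(model(test_volume), test_volume).mean().item()
        is_anomalous = score > 0.05                    # threshold picked on validation data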
      
    • D. Bhattacharya, S. Latus, F. Behrendt, F. Thimm, D. Eggert, C. Betz, A. Schlaefer (2023), "Tissue Classification During Needle Insertion Using Self-Supervised Contrastive Learning and Optical Coherence Tomography" [Abstract] [BibTeX]
    • Abstract: Needle positioning is essential for various medical applications such as epidural anaesthesia. Physicians rely on their instincts while navigating the needle in epidural spaces. Identifying the tissue structures may therefore be helpful to the physician as it can provide additional feedback in the needle insertion process. To this end, we propose a deep neural network that classifies the tissues from the phase and intensity data of complex OCT signals acquired at the needle tip. We investigate the performance of the deep neural network in a limited labelled dataset scenario and propose a novel contrastive pretraining strategy that learns invariant representation for phase and intensity data. We show that with 10% of the training set, our proposed pretraining strategy helps the model achieve an F1 score of 0.84 whereas the model achieves an F1 score of 0.60 without it. Further, we analyse the importance of phase and intensity individually towards tissue classification.
      BibTeX:
      @misc{Bhattacharya2023a,
        author = {Debayan Bhattacharya and Sarah Latus and Finn Behrendt and Florin Thimm and Dennis Eggert and Christian Betz and Alexander Schlaefer},
        title = {Tissue Classification During Needle Insertion Using Self-Supervised Contrastive Learning and Optical Coherence Tomography},
        year = {2023}
      }
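
      Example (Python): a loose sketch of contrastive pretraining across the two signal channels mentioned above, treating the phase and intensity segments of the same acquisition as a positive pair under an NT-Xent-style loss. The encoders, signal lengths and pairing scheme are assumptions for illustration, not the paper's exact strategy.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class Encoder1D(nn.Module):
            """1D CNN encoder for an OCT signal segment (intensity or phase)."""
            def __init__(self, emb_dim=64):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv1d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
                    nn.Conv1d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
                    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                    nn.Linear(32, emb_dim),
                )
            def forward(self, x):
                return F.normalize(self.net(x), dim=1)

        def nt_xent(z1, z2, tau=0.1):
            """Phase and intensity embeddings of the same segment are positives,
            all other pairs in the batch act as negatives."""
            logits = z1 @ z2.t() / tau
            targets = torch.arange(z1.size(0))
            return 0.5 * (F.cross_entropy(logits, targets)
                          + F.cross_entropy(logits.t(), targets))

        enc_intensity, enc_phase = Encoder1D(), Encoder1D()
        intensity = torch.rand(8, 1, 256)     # (batch, channel, samples)
        phase = torch.rand(8, 1, 256)
        loss = nt_xent(enc_intensity(intensity), enc_phase(phase))
        loss.backward()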
      
    • D. Eggert, D. Bhattacharya, A. Felicio-Briegel, V. Volgger, A. Schlaefer, C. Betz (2023), "Deep-learning-based image acquisition support tool for endoscopic narrow Band Imaging of the Larynx", Laryngo-Rhino-Otologie. Vol. 102(S 02) Georg Thieme Verlag. [Abstract] [BibTeX] [DOI] [URL]
    • Abstract: Narrow band imaging (NBI) enables contrast-enhanced imaging of mucosal blood vessels. Nowadays NBI is a standard feature in many endoscopes. NBI is increasingly being applied in clinical investigations of the head and neck region. Using flexible laryngoscopes, different laryngeal lesions can be investigated in awake patients. NBI enables better recognition and differentiation of different pathologies than white light endoscopy.
      BibTeX:
      @article{Eggert2023,
        author = {Eggert, Dennis and Bhattacharya, Debayan and Felicio-Briegel, Axelle and Volgger, Veronika and Schlaefer, Alexander and Betz, Christian},
        title = {Deep-learning-based image acquisition support tool for endoscopic narrow Band Imaging of the Larynx},
        journal = {Laryngo-Rhino-Otologie},
        publisher = {Georg Thieme Verlag},
        year = {2023},
        volume = {102},
        number = {S 02},
        url = {http://www.thieme-connect.com/products/ejournals/abstract/10.1055/s-0043-1767104},
        doi = {10.1055/s-0043-1767104}
      }
      
    • D. Eggert, D. Bhattacharya, A. Felicio-Briegel, V. Volgger, A. Schlaefer, C. Betz (2023), "Deep-Learning-basierte Aufnahmeunterstützung für endoskopisches Narrow Band Imaging des Larynx", Laryngo-Rhino-Otologie. Vol. 102(S 02) Georg Thieme Verlag. [Abstract] [BibTeX] [DOI] [URL]
    • Abstract: Narrow band imaging (NBI) enables contrast-enhanced visualization of blood vessels in mucosal tissue. NBI is already a standard feature of many endoscopes and is increasingly being used in the head and neck region. Using flexible laryngoscopes, various mucosal lesions of the upper aerodigestive tract can be examined in awake patients. Pathologies can usually be recognized better than with conventional white light endoscopy. Good image quality is essential for a reliable assessment of the NBI image data. NBI recordings can only be evaluated meaningfully if the superficial blood vessels are clearly visible.
      BibTeX:
      @article{Eggert2023a,
        author = {Eggert, Dennis and Bhattacharya, Debayan and Felicio-Briegel, Axelle and Volgger, Veronika and Schlaefer, Alexander and Betz, Christian},
        title = {Deep-Learning-basierte Aufnahmeunterstützung für endoskopisches Narrow Band Imaging des Larynx},
        journal = {Laryngo-Rhino-Otologie},
        publisher = {Georg Thieme Verlag},
        year = {2023},
        volume = {102},
        number = {S 02},
        url = {http://www.thieme-connect.com/products/ejournals/abstract/10.1055/s-0043-1766512},
        doi = {10.1055/s-0043-1766512}
      }
      
    • S. Gerlach, T. Hofmann, C. Fürweger, A. Schlaefer (2023), "Towards fast adaptive replanning by constrained reoptimization for intra-fractional non-periodic motion during robotic SBRT", Medical Physics. Vol. 50(7),4613-4622. [Abstract] [BibTeX] [DOI] [URL]
    • Abstract: Background: Periodic and slow target motion is tracked by synchronous motion of the treatment beams in robotic stereotactic body radiation therapy (SBRT). However, spontaneous, non-periodic displacement or drift of the target may completely change the treatment geometry. Simple motion compensation is not sufficient to guarantee the best possible treatment, since relative motion between the target and organs at risk (OARs) can cause substantial deviations of dose in the OARs. This is especially evident when considering the temporally heterogeneous dose delivery by many focused beams which is typical for robotic SBRT. Instead, a reoptimization of the remaining treatment plan after a large target motion during the treatment could potentially reduce the actually delivered dose to OARs and improve target coverage. This reoptimization task, however, is challenging due to time constraints and limited human supervision. Purpose: To study the detrimental effect of spontaneous target motion relative to surrounding OARs on the delivered dose distribution and to analyze how intra-fractional constrained replanning could improve motion compensated robotic SBRT of the prostate. Methods: We solve the inverse planning problem by optimizing a linear program. When considering intra-fractional target motion resulting in a change of geometry, we adapt the linear program to account for the changed dose coefficients and delivered dose. We reduce the problem size by only reweighting beams from the reference treatment plan without motion. For evaluation we simulate target motion and compare our approach for intra-fractional replanning to the conventional compensation by synchronous beam motion. Results are generated retrospectively on data of 50 patients. Results: Our results show that reoptimization can on average retain or improve coverage in case of target motion compared to the reference plan without motion. Compared to the conventional compensation, coverage is improved from 87.83 % to 94.81 % for large target motion. Our approach for reoptimization ensures fixed upper constraints on the dose even after motion, enabling safer intra-fraction adaption, compared to conventional motion compensation where overdosage in OARs can lead to 21.79 % higher maximum dose than planned. With an average reoptimization time of 6 s for 200 reoptimized beams our approach shows promising performance for intra-fractional application. Conclusions: We show that intra-fractional constrained reoptimization for adaption to target motion can improve coverage compared to the conventional approach of beam translation while ensuring that upper dose constraints on VOIs are not violated.
      BibTeX:
      @article{Gerlach2023,
        author = {Gerlach, Stefan and Hofmann, Theresa and Fürweger, Christoph and Schlaefer, Alexander},
        title = {Towards fast adaptive replanning by constrained reoptimization for intra-fractional non-periodic motion during robotic SBRT},
        journal = {Medical Physics},
        year = {2023},
        volume = {50},
        number = {7},
        pages = {4613-4622},
        url = {https://aapm.onlinelibrary.wiley.com/doi/abs/10.1002/mp.16381},
        doi = {10.1002/mp.16381}
      }
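
      Example (Python): a toy linear-programming sketch in the spirit of the constrained reweighting described above: beam weights are optimized to reduce target underdose subject to hard upper bounds on OAR dose. The dose coefficients, bounds and objective are made up; the clinical formulation in the paper differs.

        import numpy as np
        from scipy.optimize import linprog

        rng = np.random.default_rng(0)
        n_beams, n_target, n_oar = 50, 30, 20

        # dose per unit beam weight for target and OAR voxels (toy values)
        D_target = rng.uniform(0.5, 1.0, (n_target, n_beams))
        D_oar = rng.uniform(0.0, 0.3, (n_oar, n_beams))
        prescribed, oar_max = 1.0, 0.4

        # variables: beam weights w (n_beams) and underdose slack s (n_target);
        # minimize total slack with D_target @ w + s >= prescribed,
        # while enforcing the hard bound D_oar @ w <= oar_max
        c = np.concatenate([np.zeros(n_beams), np.ones(n_target)])
        A_ub = np.block([
            [-D_target, -np.eye(n_target)],
            [D_oar, np.zeros((n_oar, n_target))],
        ])
        b_ub = np.concatenate([-prescribed * np.ones(n_target),
                               oar_max * np.ones(n_oar)])

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
        weights = res.x[:n_beams]
        print("total target underdose slack:", res.fun)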
      
    • S. Grube, M. Bengs, M. Neidhardt, S. Latus, A. Schlaefer (2023), "Ultrasound shear wave velocity estimation in a small field of view via spatio-temporal deep learning", In Medical Imaging 2023: Image Processing. Vol. 12464,1246425. SPIE. [Abstract] [BibTeX] [DOI] [URL]
    • Abstract: A change in tissue stiffness can indicate pathological diseases and therefore supports physicians in diagnosis and treatment. Ultrasound shear wave elastography (US-SWEI) can be used to quantify tissue stiffness by estimating the velocity of propagating shear waves. While a linear US probe with a lateral imaging width of approximately 40 mm is commonly used and US-SWEI has been successfully demonstrated, some clinical applications, such as laparoscopic or endoscopic interventions, require small probes. This limits the lateral image width to the millimeter range and reduces the available information in the US images substantially. In this work, we systematically analyze the effect of a reduced lateral imaging width for shear wave velocity estimation using the conventional time-of-flight (ToF) method and spatio-temporal convolutional neural networks (ST-CNNs). For our study, we use tissue mimicking gelatin phantoms with varying stiffness and resulting shear wave velocities in the range from 3.63 m/s to 7.09 m/s. We find that lateral imaging width has a substantial impact on the performance of ToF, while shear wave velocity estimation with ST-CNNs remains robust. Our results show that shear wave velocity estimation with ST-CNN can even be performed for a lateral imaging width of 2.1 mm resulting in a mean absolute error of 0.81 ± 0.61 m/s.
      BibTeX:
      @inproceedings{Grube2023,
        author = {Sarah Grube and Marcel Bengs and Maximilian Neidhardt and Sarah Latus and Alexander Schlaefer},
        editor = {Olivier Colliot and Ivana Išgum},
        title = {Ultrasound shear wave velocity estimation in a small field of view via spatio-temporal deep learning},
        booktitle = {Medical Imaging 2023: Image Processing},
        publisher = {SPIE},
        year = {2023},
        volume = {12464},
        pages = {1246425},
        url = {https://doi.org/10.1117/12.2653833},
        doi = {10.1117/12.2653833}
      }
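
      Example (Python): a minimal sketch of the conventional time-of-flight estimate referenced above. The shear wave arrival time is detected at each lateral position and the velocity follows from a linear fit of arrival time over distance; the tracked displacement data here are synthetic.

        import numpy as np

        def tof_shear_wave_velocity(displacement, lateral_pos_m, dt_s):
            """displacement: (n_lateral, n_time) tracked axial displacement.
            The inverse slope of the arrival-time fit is the wave velocity."""
            arrival = displacement.argmax(axis=1) * dt_s       # arrival time per position
            slope, _ = np.polyfit(lateral_pos_m, arrival, 1)   # seconds per meter
            return 1.0 / slope

        # synthetic wave travelling at 5 m/s across a 40 mm aperture
        x = np.linspace(0.0, 0.04, 64)
        t = np.arange(0.0, 0.02, 1e-4)
        disp = np.exp(-((t[None, :] - x[:, None] / 5.0) / 5e-4) ** 2)
        print(tof_shear_wave_velocity(disp, x, 1e-4))          # approx. 5.0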
      
    • A. S. Hoffmann, D. Bhattacharya, B. Becker, D. Beyersdorff, E. Petersen, M. Petersen, D. Eggert, A. Schläfer, C. Betz (2023), "Machbarkeitsanalyse eines automatisierten KI-basierten Klassifikationssystems zur Erkennung von Kieferhöhlenbefunden", Laryngo-Rhino-Otologie. Vol. 102(S 02) Georg Thieme Verlag. [Abstract] [BibTeX] [DOI] [URL]
    • Abstract: Studies show an increased incidence of paranasal sinus opacities in cranial MRI without corresponding symptoms. It is of interest whether findings requiring further clarification are present. The use of AI-based methods can automate the detection of opacities and thereby reduce the workload of physicians. In this work, a method for the AI-based classification of maxillary sinus opacities was developed.
      BibTeX:
      @article{Hoffmann2023a,
        author = {Hoffmann, Anna Sophie and Bhattacharya, Debayan and Becker, Benjamin and Beyersdorff, Dirk and Petersen, Elina and Petersen, Marvin and Eggert, Dennis and Schläfer, Alexander and Betz, Christian},
        title = {Machbarkeitsanalyse eines automatisierten KI-basierten Klassifikationssystems zur Erkennung von Kieferhöhlenbefunden},
        journal = {Laryngo-Rhino-Otologie},
        publisher = {Georg Thieme Verlag},
        year = {2023},
        volume = {102},
        number = {S 02},
        url = {http://www.thieme-connect.com/products/ejournals/abstract/10.1055/s-0043-1766502},
        doi = {10.1055/s-0043-1766502}
      }
      
    • A. S. Hoffmann, D. Bhattacharya, B. Becker, D. Beyersdorff, E. Petersen, M. Petersen, D. Eggert, A. Schläfer, C. Betz (2023), "Analysing the feasibility of an automated AI-based classifier for detecting paranasal anomalies in the maxillary sinus", Laryngo-Rhino-Otologie. Vol. 102(S 02) Georg Thieme Verlag. [Abstract] [BibTeX] [DOI] [URL]
    • Abstract: Large scale population studies have been performed to analyse the rate of finding sinus opacities in cranial MRIs. It is of interest whether there are findings requiring clarification. AI-based methods can automate the detection of sinus opacities and reduce the workload of clinicians. In this work, a method for AI-based classification was developed in order to automatically recognise paranasal sinus opacities.
      BibTeX:
      @article{Hoffmann2023,
        author = {Hoffmann, Anna Sophie and Bhattacharya, Debayan and Becker, Benjamin and Beyersdorff, Dirk and Petersen, Elina and Petersen, Marvin and Eggert, Dennis and Schläfer, Alexander and Betz, Christian},
        title = {Analysing the feasibility of an automated AI-based classifier for detecting paranasal anomalies in the maxillary sinus},
        journal = {Laryngo-Rhino-Otologie},
        publisher = {Georg Thieme Verlag},
        year = {2023},
        volume = {102},
        number = {S 02},
        url = {http://www.thieme-connect.com/products/ejournals/abstract/10.1055/s-0043-1767093},
        doi = {10.1055/s-0043-1767093}
      }
      
    • I. Kniep, R. Mieling, M. Gerling, A. Schlaefer, A. Heinemann, B. Ondruschka (2023), "Bayesian Reconstruction Algorithms for Low-Dose Computed Tomography Are Not Yet Suitable in Clinical Context.", Journal of Imaging. Vol. 9(9) [Abstract] [BibTeX] [DOI] [URL]
    • Abstract: Computed tomography (CT) is a widely used examination technique that usually requires a compromise between image quality and radiation exposure. Reconstruction algorithms aim to reduce radiation exposure while maintaining comparable image quality. Recently, unsupervised deep learning methods have been proposed for this purpose. In this study, a promising sparse-view reconstruction method (posterior temperature optimized Bayesian inverse model; POTOBIM) is tested for its clinical applicability. For this study, 17 whole-body CTs of deceased were performed. In addition to POTOBIM, reconstruction was performed using filtered back projection (FBP). An evaluation was conducted by simulating sinograms and comparing the reconstruction with the original CT slice for each case. A quantitative analysis was performed using peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM). The quality was assessed visually using a modified Ludewig’s scale. In the qualitative evaluation, POTOBIM was rated worse than the reference images in most cases. A partially equivalent image quality could only be achieved with 80 projections per rotation. Quantitatively, POTOBIM does not seem to benefit from more than 60 projections. Although deep learning methods seem suitable to produce better image quality, the investigated algorithm (POTOBIM) is not yet suitable for clinical routine.
      BibTeX:
      @article{Kniep2023,
        author = {I. Kniep and R. Mieling and M. Gerling and A. Schlaefer and A. Heinemann and B. Ondruschka},
        title = {Bayesian Reconstruction Algorithms for Low-Dose Computed Tomography Are Not Yet Suitable in Clinical Context.},
        journal = {Journal of Imaging},
        year = {2023},
        volume = {9},
        number = {9},
        url = {https://www.mdpi.com/2313-433X/9/9/170},
        doi = {10.3390/jimaging9090170}
      }
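
      Example (Python): a brief illustration of the two quantitative metrics named in the abstract, PSNR and SSIM, computed with scikit-image; the reference slice and the "reconstruction" are synthetic placeholders.

        import numpy as np
        from skimage.metrics import peak_signal_noise_ratio, structural_similarity

        rng = np.random.default_rng(1)
        reference = rng.uniform(0.0, 1.0, (256, 256))    # stand-in for the original CT slice
        reconstruction = np.clip(reference + rng.normal(0.0, 0.02, reference.shape), 0.0, 1.0)

        psnr = peak_signal_noise_ratio(reference, reconstruction, data_range=1.0)
        ssim = structural_similarity(reference, reconstruction, data_range=1.0)
        print(f"PSNR: {psnr:.1f} dB, SSIM: {ssim:.3f}")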
      
    • S. Kolibová, E. Wölfel, H. Hemmatian, P. Milovanovic, H. Mushumba, B. Wulff, M. Neidhardt, K. Püschel, A. Failla, A. Vlug, A. Schlaefer, B. Ondruschka, M. Amling, L. Hofbauer, M. Rauner, B. Busse, K. Jähn-Rickert (2023), "Osteocyte apoptosis and cellular micropetrosis signify skeletal aging in type 1 diabetes", Acta Biomaterialia., 03, 2023. [Abstract] [BibTeX] [DOI]
    • Abstract: Bone fragility is a profound complication of type 1 diabetes mellitus (T1DM), increasing patient morbidity. Within the mineralized bone matrix, osteocytes build a mechanosensitive network that orchestrates bone remodeling; thus, osteocyte viability is crucial for maintaining bone homeostasis. In human cortical bone specimens from individuals with T1DM, we found signs of accelerated osteocyte apoptosis and local mineralization of osteocyte lacunae (micropetrosis) compared with samples from age-matched controls. Such morphological changes were seen in the relatively young osteonal bone matrix on the periosteal side, and micropetrosis coincided with microdamage accumulation, implying that T1DM drives local skeletal aging and thereby impairs the biomechanical competence of the bone tissue. The consequent dysfunction of the osteocyte network hampers bone remodeling and decreases bone repair mechanisms, potentially contributing to the enhanced fracture risk seen in individuals with T1DM. STATEMENT OF SIGNIFICANCE: Type 1 diabetes mellitus (T1DM) is a chronic autoimmune disease that causes hyperglycemia. Increased bone fragility is one of the complications associated with T1DM. Our latest study on T1DM-affected human cortical bone identified the viability of osteocytes, the primary bone cells, as a potentially critical factor in T1DM-bone disease. We linked T1DM with increased osteocyte apoptosis and local accumulation of mineralized lacunar spaces and microdamage. Such structural changes in bone tissue suggest that T1DM speeds up the adverse effects of aging, leading to the premature death of osteocytes and potentially contributing to diabetes-related bone fragility.
      BibTeX:
      @article{Kolibova2023,
        author = {Kolibová, Sofie and Wölfel, Eva and Hemmatian, Haniyeh and Milovanovic, Petar and Mushumba, Herbert and Wulff, Birgit and Neidhardt, Maximilian and Püschel, Klaus and Failla, Antonio and Vlug, Annegreet and Schlaefer, Alexander and Ondruschka, Benjamin and Amling, Michael and Hofbauer, Lorenz and Rauner, Martina and Busse, Björn and Jähn-Rickert, Katharina},
        title = {Osteocyte apoptosis and cellular micropetrosis signify skeletal aging in type 1 diabetes},
        journal = {Acta Biomaterialia},
        year = {2023},
        doi = {10.1016/j.actbio.2023.02.037}
      }
      
    • S. Latus, S. Grube, T. Eixmann, M. Neidhardt, S. Gerlach, R. Mieling, G. Hüttmann, M. Lutz, A. Schlaefer (2023), "A Miniature Dual-Fiber Probe for Quantitative Optical Coherence Elastography", IEEE Transactions on Biomedical Engineering., Nov, 2023. Vol. 70(11),3064-3072. [Abstract] [BibTeX] [DOI]
    • Abstract: Objective: Optical coherence elastography (OCE) allows for high resolution analysis of elastic tissue properties. However, due to the limited penetration of light into tissue, miniature probes are required to reach structures inside the body, e.g., vessel walls. Shear wave elastography relates shear wave velocities to quantitative estimates of elasticity. Generally, this is achieved by measuring the runtime of waves between two or multiple points. For miniature probes, optical fibers have been integrated and the runtime between the point of excitation and a single measurement point has been considered. This approach requires precise temporal synchronization and spatial calibration between excitation and imaging. Methods: We present a miniaturized dual-fiber OCE probe of 1 mm diameter allowing for robust shear wave elastography. Shear wave velocity is estimated between two optics and hence independent of wave propagation between excitation and imaging. We quantify the wave propagation by evaluating either a single or two measurement points. Particularly, we compare both approaches to ultrasound elastography. Results: Our experimental results demonstrate that quantification of local tissue elasticities is feasible. For homogeneous soft tissue phantoms, we obtain mean deviations of 0.15 m/s and 0.02 m/s for single-fiber and dual-fiber OCE, respectively. In inhomogeneous phantoms, we measure mean deviations of up to 0.54 m/s and 0.03 m/s for single-fiber and dual-fiber OCE, respectively. Conclusion: We present a dual-fiber OCE approach that is much more robust in inhomogeneous tissues. Moreover, we demonstrate the feasibility of elasticity quantification in ex-vivo coronary arteries. Significance: This study introduces an approach for robust elasticity quantification from within the tissue.
      BibTeX:
      @article{Latus2023,
        author = {Latus, Sarah and Grube, Sarah and Eixmann, Tim and Neidhardt, Maximilian and Gerlach, Stefan and Mieling, Robin and Hüttmann, Gereon and Lutz, Matthias and Schlaefer, Alexander},
        title = {A Miniature Dual-Fiber Probe for Quantitative Optical Coherence Elastography},
        journal = {IEEE Transactions on Biomedical Engineering},
        year = {2023},
        volume = {70},
        number = {11},
        pages = {3064-3072},
        doi = {10.1109/TBME.2023.3275539}
      }
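
      Example (Python): a minimal sketch of the two-point velocity estimate described above: the time delay between the signals of the two fibers is found by cross-correlation and the fiber spacing is divided by that delay. Signals, spacing and sampling rate are synthetic placeholders.

        import numpy as np

        def dual_fiber_velocity(sig_a, sig_b, fiber_spacing_m, fs_hz):
            """Velocity from two measurement points, independent of the
            excitation timing: spacing divided by the inter-fiber delay."""
            corr = np.correlate(sig_b, sig_a, mode="full")
            lag = corr.argmax() - (len(sig_a) - 1)   # positive if b lags a
            return fiber_spacing_m / (lag / fs_hz)

        fs = 100_000.0
        t = np.arange(0.0, 0.01, 1.0 / fs)

        def pulse(t0):
            return np.exp(-((t - t0) / 2e-4) ** 2)

        # the same transient arrives 0.5 ms later at the second fiber, 1.5 mm away
        v = dual_fiber_velocity(pulse(2e-3), pulse(2.5e-3), fiber_spacing_m=1.5e-3, fs_hz=fs)
        print(v)   # approx. 3.0 m/s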
      
    • R. Mieling, S. Latus, M. Fischer, F. Behrendt, A. Schlaefer (2023), "Optical Coherence Elastography Needle for Biomechanical Characterization of Deep Tissue", In Medical Image Computing and Computer Assisted Intervention - MICCAI 2023. Cham ,607-617. Springer Nature Switzerland. [Abstract] [BibTeX] [DOI]
    • Abstract: Compression-based optical coherence elastography (OCE) enables characterization of soft tissue by estimating elastic properties. However, previous probe designs have been limited to surface applications. We propose a bevel tip OCE needle probe for percutaneous insertions, where biomechanical characterization of deep tissue could enable precise needle placement, e.g., in prostate biopsy. We consider a dual-fiber OCE needle probe that provides estimates of local strain and load at the tip. Using a novel setup, we simulate deep tissue indentations where frictional forces and bulk sample displacement can affect biomechanical characterization. Performing surface and deep tissue indentation experiments, we compare our approach with external force and needle position measurements at the needle shaft. We consider two tissue mimicking materials simulating healthy and cancerous tissue and demonstrate that our probe can be inserted into deep tissue layers. Compared to surface indentations, external force-position measurements are strongly affected by frictional forces and bulk displacement and show a relative error of 49.2% and 42.4% for soft and stiff phantoms, respectively. In contrast, quantitative OCE measurements show a reduced relative error of 26.4% and 4.9% for deep indentations of soft and stiff phantoms, respectively. Finally, we demonstrate that the OCE measurements can be used to effectively discriminate the tissue mimicking phantoms.
      BibTeX:
      @inproceedings{Mieling2023a,
        author = {Mieling, Robin and Latus, Sarah and Fischer, Martin and Behrendt, Finn and Schlaefer, Alexander},
        editor = {Greenspan, Hayit and Madabhushi, Anant and Mousavi, Parvin and Salcudean, Septimiu and Duncan, James and Syeda-Mahmood, Tanveer and Taylor, Russell},
        title = {Optical Coherence Elastography Needle for Biomechanical Characterization of Deep Tissue},
        booktitle = {Medical Image Computing and Computer Assisted Intervention - MICCAI 2023},
        publisher = {Springer Nature Switzerland},
        year = {2023},
        pages = {607-617},
        doi = {10.1007/978-3-031-43996-4_58}
      }
      
    • R. Mieling, M. Neidhardt, S. Latus, C. Stapper, S. Gerlach, I. Kniep, A. Heinemann, B. Ondruschka, A. Schlaefer (2023), "Collaborative Robotic Biopsy with Trajectory Guidance and Needle Tip Force Feedback.", In 2023 IEEE International Conference on Robotics and Automation (ICRA)., May, 2023. ,6893-6900. [Abstract] [BibTeX] [DOI]
    • Abstract: The diagnostic value of biopsies is highly dependent on the placement of needles. Robotic trajectory guidance has been shown to improve needle positioning, but feedback for real-time navigation is limited. Haptic display of needle tip forces can provide rich feedback for needle navigation by enabling localization of tissue structures along the insertion path. We present a collaborative robotic biopsy system that combines trajectory guidance with kinesthetic feedback to assist the physician in needle placement. The robot aligns the needle while the insertion is performed in collaboration with a medical expert who controls the needle position on site. We present a needle design that senses forces at the needle tip based on optical coherence tomography and machine learning for real-time data processing. Our robotic setup allows operators to sense deep tissue interfaces independent of frictional forces to improve needle placement relative to a desired target structure. We first evaluate needle tip force sensing in ex-vivo tissue in a phantom study. We characterize the tip forces during insertions with constant velocity and demonstrate the ability to detect tissue interfaces in a collaborative user study. Participants are able to detect 91 percent of ex-vivo tissue interfaces based on needle tip force feedback alone. Finally, we demonstrate that even smaller, deep target structures can be accurately sampled by performing post-mortem in situ biopsies of the pancreas.
      BibTeX:
      @inproceedings{Mieling2023,
        author = {R. Mieling and M. Neidhardt and S. Latus and C. Stapper and S. Gerlach and I. Kniep and A. Heinemann and B. Ondruschka and A. Schlaefer},
        title = {Collaborative Robotic Biopsy with Trajectory Guidance and Needle Tip Force Feedback.},
        booktitle = {2023 IEEE International Conference on Robotics and Automation (ICRA)},
        year = {2023},
        pages = {6893-6900},
        doi = {10.1109/ICRA48891.2023.10161377}
      }
      
    • M. Neidhardt, R. Mieling, M. Bengs, A. Schlaefer (2023), "Optical force estimation for interactions between tool and soft tissues", Scientific Reports. Vol. 13(1),506. [Abstract] [BibTeX] [DOI] [URL]
    • Abstract: Robotic assistance in minimally invasive surgery offers numerous advantages for both patient and surgeon. However, the lack of force feedback in robotic surgery is a major limitation, and accurately estimating tool-tissue interaction forces remains a challenge. Image-based force estimation offers a promising solution without the need to integrate sensors into surgical tools. In this indirect approach, interaction forces are derived from the observed deformation, with learning-based methods improving accuracy and real-time capability. However, the relationship between deformation and force is determined by the stiffness of the tissue. Consequently, both deformation and local tissue properties must be observed for an approach applicable to heterogeneous tissue. In this work, we use optical coherence tomography, which can combine the detection of tissue deformation with shear wave elastography in a single modality. We present a multi-input deep learning network for processing of local elasticity estimates and volumetric image data. Our results demonstrate that accounting for elastic properties is critical for accurate image-based force estimation across different tissue types and properties. Joint processing of local elasticity information yields the best performance throughout our phantom study. Furthermore, we test our approach on soft tissue samples that were not present during training and show that generalization to other tissue properties is possible.
      BibTeX:
      @article{Neidhardt2023,
        author = {Neidhardt, Maximilian and Mieling, Robin and Bengs, Marcel and Schlaefer, Alexander},
        title = {Optical force estimation for interactions between tool and soft tissues},
        journal = {Scientific Reports},
        year = {2023},
        volume = {13},
        number = {1},
        pages = {506},
        url = {https://doi.org/10.1038/s41598-022-27036-7},
        doi = {10.1038/s41598-022-27036-7}
      }
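
      Example (Python): a rough sketch of a multi-input network in the spirit of the joint processing described above, with one branch for the volumetric image data and one for a local elasticity estimate, fused before force regression. The architecture, input shapes and sizes are invented for illustration.

        import torch
        import torch.nn as nn

        class MultiInputForceNet(nn.Module):
            """Fuses a volumetric deformation image with a scalar elasticity estimate."""
            def __init__(self):
                super().__init__()
                self.image_branch = nn.Sequential(
                    nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool3d(1), nn.Flatten(),      # -> (B, 16)
                )
                self.elasticity_branch = nn.Sequential(nn.Linear(1, 16), nn.ReLU())
                self.head = nn.Sequential(nn.Linear(32, 32), nn.ReLU(), nn.Linear(32, 1))
            def forward(self, volume, elasticity):
                feat = torch.cat([self.image_branch(volume),
                                  self.elasticity_branch(elasticity)], dim=1)
                return self.head(feat)                          # predicted force

        model = MultiInputForceNet()
        volume = torch.rand(4, 1, 32, 32, 32)    # deformed image volumes
        elasticity = torch.rand(4, 1)            # local shear wave velocity estimates
        force = model(volume, elasticity)        # shape (4, 1)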
      
    • C. Stapper, S. Gerlach, T. Hofmann, C. Fürweger, A. Schlaefer (2023), "Automated isocenter optimization approach for treatment planning for gyroscopic radiosurgery", Medical Physics. Vol. 50(8),5212-5221. [Abstract] [BibTeX] [DOI] [URL]
    • Abstract: Background: Radiosurgery is a well-established treatment for various intracranial tumors. In contrast to other established radiosurgery platforms, the new ZAP-X® allows for self-shielding gyroscopic radiosurgery. Here, treatment beams with variable beam-on times are targeted towards a small number of isocenters. The existing planning framework relies on a heuristic based on random selection or manual selection of isocenters, which often leads to a higher plan quality in clinical practice. Purpose: The purpose of this work is to study an improved approach for radiosurgery treatment planning, which automatically selects the isocenter locations for the treatment of brain tumors and diseases in the head and neck area using the new system ZAP-X®. Methods: We propose a new method to automatically obtain the locations of the isocenters, which are essential in gyroscopic radiosurgery treatment planning. First, an optimal treatment plan is created based on a randomly selected nonisocentric candidate beam set. The intersections of the resulting subset of weighted beams are then clustered to find isocenters. This approach is compared to sphere-packing, random selection, and selection by an expert planner for generating isocenters. We retrospectively evaluate plan quality on 10 acoustic neuroma cases. Results: Isocenters acquired by the method of clustering result in clinically viable plans for all 10 test cases. When using the same number of isocenters, the clustering approach improves coverage on average by 31 percentage points compared to random selection, 15 percentage points compared to sphere packing and 2 percentage points compared to the coverage achieved with the expert selected isocenters. The automatic determination of location and number of isocenters leads, on average, to a coverage of 97 ± 3% with a conformity index of 1.22 ± 0.22, while using 2.46 ± 3.60 fewer isocenters than manually selected. In terms of algorithm performance, all plans were calculated in less than 2 min with an average runtime of 75 ± 25 s. Conclusions: This study demonstrates the feasibility of an automatic isocenter selection by clustering in the treatment planning process with the ZAP-X® system. Even in complex cases where the existing approaches fail to produce feasible plans, the clustering method generates plans that are comparable to those produced by expert selected isocenters. Therefore, our approach can help reduce the effort and time required for treatment planning in gyroscopic radiosurgery.
      BibTeX:
      @article{Stapper2023,
        author = {Stapper, Carolin and Gerlach, Stefan and Hofmann, Theresa and Fürweger, Christoph and Schlaefer, Alexander},
        title = {Automated isocenter optimization approach for treatment planning for gyroscopic radiosurgery},
        journal = {Medical Physics},
        year = {2023},
        volume = {50},
        number = {8},
        pages = {5212-5221},
        url = {https://aapm.onlinelibrary.wiley.com/doi/abs/10.1002/mp.16436},
        doi = {10.1002/mp.16436}
      }
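
      Example (Python): a toy illustration of selecting isocenter candidates by clustering weighted beam intersection points, loosely following the clustering idea in the abstract above. The synthetic points, weights and the use of scikit-learn's KMeans are illustrative assumptions, not the planning framework of the paper.

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(2)
        # 3D intersection points of weighted candidate beams inside the target (toy data)
        intersections = rng.normal(loc=[10.0, 20.0, 30.0], scale=2.0, size=(500, 3))
        beam_weights = rng.uniform(0.0, 1.0, 500)

        # cluster the intersections, weighting each point by its beam weight;
        # the cluster centres serve as isocenter candidates
        km = KMeans(n_clusters=4, n_init=10, random_state=0)
        km.fit(intersections, sample_weight=beam_weights)
        isocenters = km.cluster_centers_        # (4, 3) candidate positions
        print(isocenters)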
      
    • M. Stender, J. Ohlsen, H. Geisler, A. Chabchoub, N. Hoffmann, A. Schlaefer (2023), "Up-Net: a generic deep learning-based time stepper for parameterized spatio-temporal dynamics", Computational Mechanics. Vol. 71(6),1227-1249. [Abstract] [BibTeX] [DOI] [URL]
    • Abstract: In the age of big data availability, data-driven techniques have been proposed recently to compute the time evolution of spatio-temporal dynamics. Depending on the required a priori knowledge about the underlying processes, a spectrum of black-box end-to-end learning approaches, physics-informed neural networks, and data-informed discrepancy modeling approaches can be identified. In this work, we propose a purely data-driven approach that uses fully convolutional neural networks to learn spatio-temporal dynamics directly from parameterized datasets of linear spatio-temporal processes. The parameterization allows for data fusion of field quantities, domain shapes, and boundary conditions in the proposed U^p-Net architecture. Multi-domain U^p-Net models, therefore, can generalize to different scenes, initial conditions, domain shapes, and domain sizes without requiring re-training or physical priors. Numerical experiments conducted on a universal and two-dimensional wave equation and the transient heat equation for validation purposes show that the proposed U^p-Net outperforms classical U-Net and conventional encoder-decoder architectures of the same complexity. Owing to the scene parameterization, the U^p-Net models learn to predict refraction and reflections arising from domain inhomogeneities and boundaries. Generalization properties of the model outside the physical training parameter distributions and for unseen domain shapes are analyzed. The deep learning flow map models are employed for long-term predictions in a recursive time-stepping scheme, indicating the potential for data-driven forecasting tasks. This work is accompanied by an open-sourced code.
      BibTeX:
      @article{Stender2023,
        author = {Stender, Merten and Ohlsen, Jakob and Geisler, Hendrik and Chabchoub, Amin and Hoffmann, Norbert and Schlaefer, Alexander},
        title = {Up-Net: a generic deep learning-based time stepper for parameterized spatio-temporal dynamics},
        journal = {Computational Mechanics},
        year = {2023},
        volume = {71},
        number = {6},
        pages = {1227--1249},
        url = {https://doi.org/10.1007/s00466-023-02295-x},
        doi = {10.1007/s00466-023-02295-x}
      }
      

      2022

      • F. Behrendt, M. Bengs, D. Bhattacharya, J. Krüger, R. Opfer, A. Schlaefer (2022), "Capturing Inter-Slice Dependencies of 3D Brain MRI-Scans for Unsupervised Anomaly Detection.", In Medical Imaging with Deep Learning. [BibTeX] [URL]
      • BibTeX:
        @inproceedings{Behrendt2022,
          author = {F. Behrendt and M. Bengs and D. Bhattacharya and J. Krüger and R. Opfer and A. Schlaefer},
          title = {Capturing Inter-Slice Dependencies of 3D Brain MRI-Scans for Unsupervised Anomaly Detection.},
          booktitle = {Medical Imaging with Deep Learning},
          year = {2022},
          url = {https://openreview.net/forum?id=db8wDgKH4p4}
        }
        
      • F. Behrendt, M. Bengs, F. Rogge, J. Krüger, R. Opfer, A. Schlaefer (2022), "Unsupervised Anomaly Detection in 3D Brain MRI Using Deep Learning with Impured Training Data.", In 2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI). ,1-4. [Abstract] [BibTeX] [DOI]
      • Abstract: The detection of lesions in magnetic resonance imaging (MRI) scans of human brains remains challenging, time-consuming and error-prone. Recently, unsupervised anomaly detection (UAD) methods have shown promising results for this task. These methods rely on training data sets that solely contain healthy samples. Compared to supervised approaches, this significantly reduces the need for an extensive amount of labeled training data. However, data labelling remains error-prone. We study how unhealthy samples within the training data affect anomaly detection performance for brain MRI-scans. For our evaluations, we consider autoencoders (AE) as a well-established baseline method for UAD. We systematically evaluate the effect of impured training data by injecting different quantities of unhealthy samples to our training set of healthy samples. We evaluate a method to identify falsely labeled samples directly during training based on the reconstruction error of the AE. Our results show that training with impured data decreases the UAD performance notably even with few falsely labeled samples. By performing outlier removal directly during training based on the reconstruction-loss, we demonstrate that falsely labeled data can be detected and that this mitigates the effect of falsely labeled data.
        BibTeX:
        @inproceedings{Behrendt2022a,
          author = {F. Behrendt and M. Bengs and F. Rogge and J. Krüger and R. Opfer and A. Schlaefer},
          title = {Unsupervised Anomaly Detection in 3D Brain MRI Using Deep Learning with Impured Training Data.},
          booktitle = {2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI)},
          year = {2022},
          pages = {1-4},
          doi = {10.1109/ISBI52829.2022.9761443}
        }
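
        Example (Python): a rough sketch of the outlier-removal idea described above: within a training step of an autoencoder, the samples with the highest reconstruction error are excluded from the loss, since unusually high errors may indicate unhealthy scans that slipped into the "healthy" training set. Model, data and the drop fraction are placeholders.

          import torch
          import torch.nn as nn

          model = nn.Sequential(nn.Flatten(), nn.Linear(64, 16), nn.ReLU(),
                                nn.Linear(16, 64), nn.Unflatten(1, (1, 8, 8)))
          optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

          volumes = torch.rand(32, 1, 8, 8)        # one training batch
          drop_fraction = 0.1                      # share treated as potentially mislabeled

          recon = model(volumes)
          per_sample = (recon - volumes).abs().flatten(1).mean(dim=1)   # (B,)

          # keep only the samples with the lowest reconstruction error
          k = int((1.0 - drop_fraction) * per_sample.numel())
          keep = per_sample.topk(k, largest=False).indices
          loss = per_sample[keep].mean()

          optimizer.zero_grad()
          loss.backward()
          optimizer.step()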
        
      • F. Behrendt, D. Bhattacharya, J. Krüger, R. Opfer, A. Schlaefer (2022), "Data-Efficient Vision Transformers for Multi-Label Disease Classification on Chest Radiographs", Current Directions in Biomedical Engineering. Vol. 8(1),34-37. [Abstract] [BibTeX] [DOI] [URL]
      • Abstract: Radiographs are a versatile diagnostic tool for the detection and assessment of pathologies, for treatment planning or for navigation and localization purposes in clinical interventions. However, their interpretation and assessment by radiologists can be tedious and error-prone. Thus, a wide variety of deep learning methods have been proposed to support radiologists interpreting radiographs. Mostly, these approaches rely on convolutional neural networks (CNN) to extract features from images. Especially for the multi-label classification of pathologies on chest radiographs (Chest X-Rays, CXR), CNNs have proven to be well suited. On the contrary, Vision Transformers (ViTs) have not been applied to this task despite their high classification performance on generic images and interpretable local saliency maps which could add value to clinical interventions. ViTs do not rely on convolutions but on patch-based self-attention and in contrast to CNNs, no prior knowledge of local connectivity is present. While this leads to increased capacity, ViTs typically require an excessive amount of training data which represents a hurdle in the medical domain as high costs are associated with collecting large medical data sets. In this work, we systematically compare the classification performance of ViTs and CNNs for different data set sizes and evaluate more data-efficient ViT variants (DeiT). Our results show that while the performance between ViTs and CNNs is on par with a small benefit for ViTs, DeiTs outperform the former if a reasonably large data set is available for training.
        BibTeX:
        @article{Behrendt2022b,
          author = {Finn Behrendt and Debayan Bhattacharya and Julia Krüger and Roland Opfer and Alexander Schlaefer},
          title = {Data-Efficient Vision Transformers for Multi-Label Disease Classification on Chest Radiographs},
          journal = {Current Directions in Biomedical Engineering},
          year = {2022},
          volume = {8},
          number = {1},
          pages = {34--37},
          url = {https://doi.org/10.1515/cdbme-2022-0009},
          doi = {10.1515/cdbme-2022-0009}
        }
        
      • M. Bengs, F. Behrendt, M.-H. Laves, J. Krüger, R. Opfer, A. Schlaefer (2022), "Unsupervised anomaly detection in 3D brain MRI using deep learning with multi-task brain age prediction", In Medical Imaging 2022: Computer-Aided Diagnosis. Vol. 12033,1203314. SPIE. [Abstract] [BibTeX] [DOI] [URL]
      • Abstract: Lesion detection in brain Magnetic Resonance Images (MRIs) remains a challenging task. MRIs are typically read and interpreted by domain experts, which is a tedious and time-consuming process. Recently, unsupervised anomaly detection (UAD) in brain MRI with deep learning has shown promising results to provide a quick, initial assessment. So far, these methods only rely on the visual appearance of healthy brain anatomy for anomaly detection. Another biomarker for abnormal brain development is the deviation between the brain age and the chronological age, which is unexplored in combination with UAD. We propose deep learning for UAD in 3D brain MRI considering additional age information. We analyze the value of age information during training, as an additional anomaly score, and systematically study several architecture concepts. Based on our analysis, we propose a novel deep learning approach for UAD with multi-task age prediction. We use clinical T1-weighted MRIs of 1735 healthy subjects and the publicly available BraTs 2019 data set for our study. Our novel approach significantly improves UAD performance with an AUC of 92.60% compared to an AUC-score of 84.37% using previous approaches without age information.
        BibTeX:
        @inproceedings{Bengs2022,
          author = {Marcel Bengs and Finn Behrendt and Max-Heinrich Laves and Julia Krüger and Roland Opfer and Alexander Schlaefer},
          editor = {Karen Drukker and Khan M. Iftekharuddin and Hongbing Lu and Maciej A. Mazurowski and Chisako Muramatsu and Ravi K. Samala},
          title = {Unsupervised anomaly detection in 3D brain MRI using deep learning with multi-task brain age prediction},
          booktitle = {Medical Imaging 2022: Computer-Aided Diagnosis},
          publisher = {SPIE},
          year = {2022},
          volume = {12033},
          pages = {1203314},
          url = {https://doi.org/10.1117/12.2608120},
          doi = {10.1117/12.2608120}
        }
        
      • D. Bhattacharya, B. T. Becker, F. Behrendt, M. Bengs, D. Beyersdorff, D. Eggert, E. Petersen, F. Jansen, M. Petersen, B. Cheng, C. Betz, A. Schlaefer, A. S. Hoffmann (2022), "Supervised Contrastive Learning to Classify Paranasal Anomalies in the Maxillary Sinus", In Medical Image Computing and Computer Assisted Intervention -- MICCAI 2022. Cham ,429-438. Springer Nature Switzerland. [Abstract] [BibTeX] [DOI] [URL]
      • Abstract: Using deep learning techniques, anomalies in the paranasal sinus system can be detected automatically in MRI images and can be further analyzed and classified based on their volume, shape and other parameters like local contrast. However due to limited training data, traditional supervised learning methods often fail to generalize. Existing deep learning methods in paranasal anomaly classification have been used to diagnose at most one anomaly. In our work, we consider three anomalies. Specifically, we employ a 3D CNN to separate maxillary sinus volumes without anomaly from maxillary sinus volumes with anomaly. To learn robust representations from a small labelled dataset, we propose a novel learning paradigm that combines contrastive loss and cross-entropy loss. Particularly, we use a supervised contrastive loss that encourages embeddings of maxillary sinus volumes with and without anomaly to form two distinct clusters while the cross-entropy loss encourages the 3D CNN to maintain its discriminative ability. We report that optimising with both losses is advantageous over optimising with only one loss. We also find that our training strategy leads to label efficiency. With our method, a 3D CNN classifier achieves an AUROC of 0.85 ± 0.03 while a 3D CNN classifier optimised with cross-entropy loss achieves an AUROC of 0.66 ± 0.1. Our source code is available at https://github.com/dawnofthedebayan/SupConCE_MICCAI_22.
        BibTeX:
        @inproceedings{Bhattacharya2022b,
          author = {Bhattacharya, Debayan and Becker, Benjamin Tobias and Behrendt, Finn and Bengs, Marcel and Beyersdorff, Dirk and Eggert, Dennis and Petersen, Elina and Jansen, Florian and Petersen, Marvin and Cheng, Bastian and Betz, Christian and Schlaefer, Alexander and Hoffmann, Anna Sophie},
          editor = {Wang, Linwei and Dou, Qi and Fletcher, P. Thomas and Speidel, Stefanie and Li, Shuo},
          title = {Supervised Contrastive Learning to Classify Paranasal Anomalies in the Maxillary Sinus},
          booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2022},
          publisher = {Springer Nature Switzerland},
          year = {2022},
          pages = {429--438},
          url = {https://link.springer.com/chapter/10.1007/978-3-031-16437-8_41},
          doi = {10.1007/978-3-031-16437-8_41}
        }
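
        Example (Python): a loose sketch of combining a supervised contrastive term with cross-entropy, as described above. The simplified SupCon-style formulation and the toy tensors are illustrative, not the authors' implementation.

          import torch
          import torch.nn.functional as F

          def supervised_contrastive(embeddings, labels, tau=0.1):
              """Pull same-label embeddings together and push different labels apart."""
              z = F.normalize(embeddings, dim=1)
              sim = z @ z.t() / tau
              n = z.size(0)
              self_mask = torch.eye(n, dtype=torch.bool)
              pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~self_mask

              sim = sim.masked_fill(self_mask, float("-inf"))      # exclude self-pairs
              log_prob = sim - sim.logsumexp(dim=1, keepdim=True)  # log-softmax over others
              pos_counts = pos_mask.sum(dim=1).clamp(min=1)
              per_anchor = log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / pos_counts
              return -per_anchor.mean()

          # joint objective: contrastive term on embeddings plus cross-entropy on logits
          embeddings = torch.randn(8, 32, requires_grad=True)
          logits = torch.randn(8, 2, requires_grad=True)
          labels = torch.randint(0, 2, (8,))
          loss = supervised_contrastive(embeddings, labels) + F.cross_entropy(logits, labels)
          loss.backward()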
        
      • D. Bhattacharya, F. Behrendt, A. Felicio-Briegel, V. Volgger, D. Eggert, C. Betz, A. Schlaefer (2022), "Learning Robust Representation for Laryngeal Cancer Classification in Vocal Folds from Narrow Band Images.", In Medical Imaging with Deep Learning. [BibTeX] [URL]
      • BibTeX:
        @inproceedings{Bhattacharya2022,
          author = {D. Bhattacharya and F. Behrendt and A. Felicio-Briegel and V. Volgger and D. Eggert and C. Betz and A. Schlaefer},
          title = {Learning Robust Representation for Laryngeal Cancer Classification in Vocal Folds from Narrow Band Images.},
          booktitle = {Medical Imaging with Deep Learning},
          year = {2022},
          url = {https://openreview.net/forum?id=nJd70UxI5hH}
        }
        
      • D. Bhattacharya, D. Eggert, C. Betz, A. Schlaefer (2022), "Squeeze and multi-context attention for polyp segmentation", International Journal of Imaging Systems and Technology. Vol. n/a(n/a) [Abstract] [BibTeX] [DOI] [URL]
      • Abstract: Artificial Intelligence-based Computer Aided Diagnostics (AI-CADx) have been proposed to help physicians in reducing misdetection of polyps in colonoscopy examination. The heterogeneity of a polyp's appearance makes detection challenging for physicians and AI-CADx. Towards building better AI-CADx, we propose an attention module called Squeeze and Multi-Context Attention (SMCA) that re-calibrates a feature map by providing channel and spatial attention by taking into consideration highly activated features and context of the features at multiple receptive fields simultaneously. We test the effectiveness of SMCA by incorporating it into the encoder of five popular segmentation models. We use five public datasets and construct intra-dataset and inter-dataset test sets to evaluate the generalizing capability of models with SMCA. Our intra-dataset evaluation shows that U-Net with SMCA and without SMCA has a precision of 0.86 ± 0.01 and 0.76 ± 0.02 respectively on CVC-ClinicDB. Our inter-dataset evaluation reveals that U-Net with SMCA and without SMCA has a precision of 0.62 ± 0.01 and 0.55 ± 0.09 respectively when trained on Kvasir-SEG and tested on CVC-ColonDB. Similar results are observed using other segmentation models and other public datasets. In conclusion, we demonstrate that incorporating SMCA into the segmentation models leads to an increase in generalizing capability of the segmentation models.
        BibTeX:
        @article{Bhattacharya2022a,
          author = {Bhattacharya, Debayan and Eggert, Dennis and Betz, Christian and Schlaefer, Alexander},
          title = {Squeeze and multi-context attention for polyp segmentation},
          journal = {International Journal of Imaging Systems and Technology},
          year = {2022},
          volume = {n/a},
          number = {n/a},
          url = {https://onlinelibrary.wiley.com/doi/abs/10.1002/ima.22795},
          doi = {10.1002/ima.22795}
        }
        
      • D. Eggert, M. Bengs, S. Westermann, N. Gessert, A. O. H. Gerstner, N. A. Mueller, J. Bewarder, A. Schlaefer, C. Betz, W. Laffers (2022), "In vivo detection of head and neck tumors by hyperspectral imaging combined with deep learning methods.", Journal of Biophotonics. Vol. 15(3),e202100167. [Abstract] [BibTeX] [DOI] [URL]
      • Abstract: Currently, there are no fast and accurate screening methods available for head and neck cancer, the eighth most common tumor entity. For this study, we used hyperspectral imaging, an imaging technique for quantitative and objective surface analysis, combined with deep learning methods for automated tissue classification. As part of a prospective clinical observational study, hyperspectral datasets of laryngeal, hypopharyngeal and oropharyngeal mucosa were recorded in 98 patients before surgery in vivo. We established an automated data interpretation pathway that can classify the tissue into healthy and tumorous using convolutional neural networks with 2D spatial or 3D spatio-spectral convolutions combined with a state-of-the-art Densenet architecture. Using 24 patients for testing, our 3D spatio-spectral Densenet classification method achieves an average accuracy of 81%, a sensitivity of 83% and a specificity of 79%.
        BibTeX:
        @article{Eggert2022,
          author = {D. Eggert and M. Bengs and S. Westermann and N. Gessert and A. O. H. Gerstner and N. A. Mueller and J. Bewarder and A. Schlaefer and C. Betz and W. Laffers},
          title = {In vivo detection of head and neck tumors by hyperspectral imaging combined with deep learning methods.},
          journal = {Journal of Biophotonics},
          year = {2022},
          volume = {15},
          number = {3},
          pages = {e202100167},
          url = {https://onlinelibrary.wiley.com/doi/abs/10.1002/jbio.202100167},
          doi = {10.1002/jbio.202100167}
        }
        
      • S. Gerlach, T. Hofmann, C. Fuerweger, A. Schlaefer (2022), "TH-B-206-02: Fast Adaptive Replanning by Constrained Reoptimization for Intra-Fractional Non-Periodic Motion During SBRT of the Prostate", In Medical Physics. Vol. 49(6),E570-E570. [Abstract] [BibTeX] [URL]
      • Abstract: Purpose: Periodic motion of the target can be compensated by translational motion of the treatment beams in robotic SBRT. However, spontaneous, non-periodic displacement of the target may completely change the treatment geometry. In this case, translation is not sufficient since relative motion between the PTV and OARs can cause substantial deviations of dose in the OARs. Instead, solving a new optimization problem is required after partial dose delivery. We demonstrate this effect and propose a method for adaption by replanning which accounts for the change in the geometry. Methods: In contrast to typical adaptive strategies, our approach is based on complete and constrained replanning of the optimization problem which guarantees that no side effects such as higher doses than prescribed can occur in the treatment plan. We adapt the linear program to account for the changed treatment geometry which allows for fast reoptimization. For evaluation, we translate the target with random direction and length sampled from a truncated normal distribution with mean values from 12.5 to 30mm without overlap with OARs. We study treatment plans with approximately 300 treatment beams and consider the motion to occur after 100 delivered beams. We solve in total 40,950 inverse planning problems for 45 patients. Results: Replanning can compensate for coverage loss and avoid constraint violation. Runtime of reoptimization is on average 14s. When not compensating for movement, coverage can decrease from 95% to 20%. While translation of the beam source can compensate for loss in coverage, dose constraints can be violated. E.g. maximum dose in the rectum is violated in 62% of treatment plans with translational compensation. Conclusion: For non-periodic target displacements, translational compensation can lead to suboptimal treatment plan delivery. Constrained replanning after partial delivery of the treatment plan can compensate for the negative impact on the delivered dose distribution.
        BibTeX:
        @inproceedings{Gerlach2022b,
          author = {Gerlach, S and Hofmann, T and Fuerweger, C and Schlaefer, A},
          title = {TH-B-206-02: Fast Adaptive Replanning by Constrained Reoptimization for Intra-Fractional Non-Periodic Motion During SBRT of the Prostate},
          booktitle = {Medical Physics},
          year = {2022},
          volume = {49},
          number = {6},
          pages = {E570--E570},
          url = {https://aapm.onlinelibrary.wiley.com/doi/epdf/10.1002/mp.15769}
        }
        
      • S. Gerlach, T. Hofmann, C. Fürweger, A. Schlaefer (2022), "AI-based optimization for US-guided radiation therapy of the prostate", International Journal of Computer Assisted Radiology and Surgery. [Abstract] [BibTeX] [DOI] [URL]
      • Abstract: Fast volumetric ultrasound presents an interesting modality for continuous and real-time intra-fractional target tracking in radiation therapy of lesions in the abdomen. However, the placement of the ultrasound probe close to the target structures leads to blocking some beam directions.
        BibTeX:
        @article{Gerlach2022,
          author = {Gerlach, Stefan and Hofmann, Theresa and Fürweger, Christoph and Schlaefer, Alexander},
          title = {AI-based optimization for US-guided radiation therapy of the prostate},
          journal = {International Journal of Computer Assisted Radiology and Surgery},
          year = {2022},
          url = {https://doi.org/10.1007/s11548-022-02664-6},
          doi = {10.1007/s11548-022-02664-6}
        }
        
      • S. Gerlach, A. Schlaefer (2022), "Robotic Systems in Radiotherapy and Radiosurgery.", Current Robotics Reports. [Abstract] [BibTeX] [DOI] [URL]
      • Abstract: This review provides an overview of robotic systems in radiotherapy and radiosurgery, with a focus on medical devices and recently proposed research systems. We summarize the key motivation for using robotic systems and illustrate the potential advantages.
        BibTeX:
        @article{Gerlach2022a,
          author = {S. Gerlach and A. Schlaefer},
          title = {Robotic Systems in Radiotherapy and Radiosurgery.},
          journal = {Current Robotics Reports},
          year = {2022},
          url = {https://doi.org/10.1007/s43154-021-00072-3},
          doi = {10.1007/s43154-021-00072-3}
        }
        
      • S. Grube, M. Neidhardt, S. Latus, A. Schlaefer (2022), "Influence of the Field of View on Shear Wave Velocity Estimation", Current Directions in Biomedical Engineering. Vol. 8(1),42-45. [Abstract] [BibTeX] [DOI] [URL]
      • Abstract: Tissue elasticity contains important information for physicians in diagnosis and treatment, and, e.g., can help in tumor detection because tumors are stiffer than healthy tissue. Ultrasound shear wave elastography imaging (US-SWEI) can be used to estimate tissue stiffness by measuring the velocity of induced shear waves. Commonly, a linear US probe is used to track shear waves at a high imaging frequency in 2D. Real-time US-SWEI is limited by the required time for data processing. Hence, reducing the imaging field of view (FOV) is beneficial as it decreases the size of the acquired data and thereby the acquisition, transfer and processing time. However, a decrease in the FOV has the disadvantage that shear waves are tracked over a smaller distance and thus, also fewer sampling points are available for velocity estimation. This trade-off between a smaller FOV and thus, a smaller data size, and the accuracy of shear wave velocity estimation is investigated in this work. For this purpose, shear waves were tracked with a linear US probe in gelatin phantoms with four different stiffness values. During data processing, we reduced the FOV virtually from 38.1 mm to 2.1 mm. It was found that a reduction of the FOV to 4.5 mm leads to an overestimation of up to five times larger shear wave velocities but still allows to distinguish between phantoms of different stiffness. However, not all estimated velocity values could be clearly assigned to the correct stiffness value. The smallest studied FOV of 2.1 mm was not sufficient for distinguishing between the phantoms anymore.
        BibTeX:
        @article{Grube2022,
          author = {Sarah Grube and Maximilian Neidhardt and Sarah Latus and Alexander Schlaefer},
          title = {Influence of the Field of View on Shear Wave Velocity Estimation},
          journal = {Current Directions in Biomedical Engineering},
          year = {2022},
          volume = {8},
          number = {1},
          pages = {42--45},
          url = {https://doi.org/10.1515/cdbme-2022-0011},
          doi = {10.1515/cdbme-2022-0011}
        }
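
        To illustrate the velocity estimation discussed above, the sketch below estimates a shear wave velocity by time-of-flight between two lateral tracking positions. It is a minimal example with assumed function names and synthetic data, not the paper's processing pipeline:

          import numpy as np

          def shear_wave_velocity(trace_a, trace_b, dx_mm, prf_hz):
              """Velocity in m/s from the lag of the peak cross-correlation between
              two tracking positions separated by dx_mm, sampled at prf_hz."""
              a = trace_a - trace_a.mean()
              b = trace_b - trace_b.mean()
              lag = np.argmax(np.correlate(b, a, mode="full")) - (len(a) - 1)  # samples
              dt = lag / prf_hz                                                # seconds
              return (dx_mm / 1e3) / dt if dt > 0 else float("nan")

          # Toy example: a displacement pulse arriving 2 ms later at a position
          # 4 mm further away corresponds to roughly 2 m/s.
          prf = 10_000.0
          t = np.arange(0, 0.02, 1 / prf)
          pulse = lambda t0: np.exp(-((t - t0) ** 2) / (2 * 0.0005 ** 2))
          print(shear_wave_velocity(pulse(0.005), pulse(0.007), dx_mm=4.0, prf_hz=prf))

        Shrinking the field of view corresponds to reducing dx_mm and the number of usable tracking positions, which is the trade-off studied in the paper.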
        
      • M.-H. Laves, M. Tölle, A. Schlaefer, S. Engelhardt (2022), "Posterior temperature optimized Bayesian models for inverse problems in medical imaging", Medical Image Analysis. Vol. 78,102382. [Abstract] [BibTeX] [DOI] [URL]
      • Abstract: We present Posterior Temperature Optimized Bayesian Inverse Models (POTOBIM), an unsupervised Bayesian approach to inverse problems in medical imaging using mean-field variational inference with a fully tempered posterior. Bayesian methods exhibit useful properties for approaching inverse tasks, such as tomographic reconstruction or image denoising. A suitable prior distribution introduces regularization, which is needed to solve the ill-posed problem and reduces overfitting the data. In practice, however, this often results in a suboptimal posterior temperature, and the full potential of the Bayesian approach is not being exploited. In POTOBIM, we optimize both the parameters of the prior distribution and the posterior temperature with respect to reconstruction accuracy using Bayesian optimization with Gaussian process regression. Our method is extensively evaluated on four different inverse tasks on a variety of modalities with images from public data sets and we demonstrate that an optimized posterior temperature outperforms both non-Bayesian and Bayesian approaches without temperature optimization. The use of an optimized prior distribution and posterior temperature leads to improved accuracy and uncertainty estimation and we show that it is sufficient to find these hyperparameters per task domain. Well-tempered posteriors yield calibrated uncertainty, which increases the reliability in the predictions. Our source code is publicly available at github.com/Cardio-AI/mfvi-dip-mia.
        BibTeX:
        @article{Laves2022,
          author = {Max-Heinrich Laves and Malte Tölle and Alexander Schlaefer and Sandy Engelhardt},
          title = {Posterior temperature optimized Bayesian models for inverse problems in medical imaging},
          journal = {Medical Image Analysis},
          year = {2022},
          volume = {78},
          pages = {102382},
          url = {https://www.sciencedirect.com/science/article/pii/S1361841522000342},
          doi = {10.1016/j.media.2022.102382}
        }
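
        As background for the fully tempered posterior referenced in the entry above, a posterior tempered with temperature $T$ is commonly written as follows (illustrative notation; the exact parameterization in the paper may differ):

          p_T(\theta \mid \mathcal{D}) \;\propto\; \bigl( p(\mathcal{D} \mid \theta)\, p(\theta) \bigr)^{1/T}

        Setting $T = 1$ recovers the standard Bayes posterior. According to the abstract, POTOBIM treats $T$, together with the prior parameters, as hyperparameters tuned by Bayesian optimization with Gaussian process regression against reconstruction accuracy.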
        
      • L. Maack, L. Holstein, A. Schlaefer (2022), "GANs for generation of synthetic ultrasound images from small datasets", Current Directions in Biomedical Engineering. Vol. 8(1),17-20. [Abstract] [BibTeX] [DOI] [URL]
      • Abstract: The task of medical image classification is increasingly supported by algorithms. Deep learning methods like convolutional neural networks (CNNs) show superior performance in medical image analysis but need a high-quality training dataset with a large number of annotated samples. Particularly in the medical domain, the availability of such datasets is rare due to data privacy or the lack of data sharing practices among institutes. Generative adversarial networks (GANs) are able to generate high quality synthetic images. This work investigates the capabilities of different state-of-the-art GAN architectures in generating realistic breast ultrasound images if only a small amount of training data is available. In a second step, these synthetic images are used to augment the real ultrasound image dataset utilized for training CNNs. The training of both GANs and CNNs is conducted with systematically reduced dataset sizes. The GAN architectures are capable of generating realistic ultrasound images. GANs using data augmentation techniques outperform the baseline StyleGAN2 with respect to the Fréchet Inception distance by up to 64.2%. CNN models trained with additional synthetic data outperform the baseline CNN model using only real data for training by up to 15.3% with respect to the F1 score, especially for datasets containing less than 100 images. As a conclusion, GANs can successfully be used to generate synthetic ultrasound images of high quality and diversity, improve classification performance of CNNs and thus provide a benefit to computer-aided diagnostics.
        BibTeX:
        @article{Maack2022,
          author = {Lennart Maack and Lennart Holstein and Alexander Schlaefer},
          title = {GANs for generation of synthetic ultrasound images from small datasets},
          journal = {Current Directions in Biomedical Engineering},
          year = {2022},
          volume = {8},
          number = {1},
          pages = {17--20},
          url = {https://doi.org/10.1515/cdbme-2022-0005},
          doi = {10.1515/cdbme-2022-0005}
        }
        
      • R. Mieling, C. Stapper, S. Gerlach, M. Neidhardt, S. Latus, M. Gromniak, P. Breitfeld, A. Schlaefer (2022), "Proximity-Based Haptic Feedback for Collaborative Robotic Needle Insertion", In Haptics: Science, Technology, Applications. Cham ,301-309. Springer International Publishing. [Abstract] [BibTeX]
      • Abstract: Collaborative robotic needle insertions have the potential to improve placement accuracy and safety, e.g., during epidural anesthesia. Epidural anesthesia provides effective regional pain management but can lead to serious complications, such as nerve injury or cerebrospinal fluid leakage. Robotic assistance might prevent inadvertent puncture by providing haptic feedback to the physician. Haptic feedback can be realized on the basis of force measurements at the needle. However, contact should be avoided for delicate structures. We propose a proximity-based method to provide feedback prior to contact. We measure the distance to boundary layers, visualize the proximity for the operator and further feedback it as a haptic resistance. We compare our approach to haptic feedback based on needle forces and visual feedback without haptics. Participants are asked to realize needle insertions with each of the three feedback modes. We use phantoms that mimic the structures punctured during epidural anesthesia. We show that visual feedback improves needle placement, but only proximity-based haptic feedback reduces accidental puncture. The puncture rate is 62% for force-based haptic feedback, 60% for visual feedback and 6% for proximity-based haptic feedback. Final needle placement inside the epidural space is achieved in 38%, 70% and 96% for force-based haptic, visual and proximity-based haptic feedback, respectively. Our results suggest that proximity-based haptic feedback could improve needle placement safety in the context of epidural anesthesia.
        BibTeX:
        @inproceedings{Mieling2022,
          author = {Mieling, Robin and Stapper, Carolin and Gerlach, Stefan and Neidhardt, Maximilian and Latus, Sarah and Gromniak, Martin and Breitfeld, Philipp and Schlaefer, Alexander},
          editor = {Seifi, Hasti and Kappers, Astrid M. L. and Schneider, Oliver and Drewing, Knut and Pacchierotti, Claudio and Abbasimoshaei, Alireza and Huisman, Gijs and Kern, Thorsten A.},
          title = {Proximity-Based Haptic Feedback for Collaborative Robotic Needle Insertion},
          booktitle = {Haptics: Science, Technology, Applications},
          publisher = {Springer International Publishing},
          year = {2022},
          pages = {301--309}
        }
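
        One simple way to turn a measured distance to a boundary layer into a haptic resistance, in the spirit of the proximity-based feedback above, is a saturating ramp. The thresholds and the linear mapping below are assumptions for illustration, not the mapping used in the study:

          def proximity_resistance(distance_mm, d_warn=5.0, d_stop=1.0, f_max=8.0):
              """Resistive force in newtons: 0 N beyond d_warn, growing linearly to
              f_max at d_stop, and saturating at f_max below d_stop."""
              if distance_mm >= d_warn:
                  return 0.0
              if distance_mm <= d_stop:
                  return f_max
              return f_max * (d_warn - distance_mm) / (d_warn - d_stop)

          for d in (10.0, 5.0, 3.0, 1.0, 0.5):
              print(f"{d:4.1f} mm -> {proximity_resistance(d):.2f} N")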
        
      • M. Neidhardt, M. Bengs, S. Latus, S. Gerlach, C. J. Cyron, J. Sprenger, A. Schlaefer (2022), "Ultrasound Shear Wave Elasticity Imaging with Spatio-Temporal Deep Learning", IEEE Transactions on Biomedical Engineering. ,1-1. [Abstract] [BibTeX] [DOI] [URL]
      • Abstract: Ultrasound shear wave elasticity imaging is a valuable tool for quantifying the elastic properties of tissue. Typically, the shear wave velocity is derived and mapped to an elasticity value, which neglects information such as the shape of the propagating shear wave or push sequence characteristics. We present 3D spatio-temporal CNNs for fast local elasticity estimation from ultrasound data. This approach is based on retrieving elastic properties from shear wave propagation within small local regions. A large training data set is acquired with a robot from homogeneous gelatin phantoms ranging from 17.42 kPa to 126.05 kPa with various push locations. The results show that our approach can estimate elastic properties on a pixelwise basis with a mean absolute error of 5.01±4.37 kPa. Furthermore, we estimate local elasticity independent of the push location and can even perform accurate estimates inside the push region. For phantoms with embedded inclusions, we report a 53.93% lower MAE (7.50 kPa) and on the background of 85.24% (1.64 kPa) compared to a conventional shear wave method. Overall, our method offers fast local estimations of elastic properties with small spatio-temporal window sizes.
        BibTeX:
        @article{Neidhardt2022a,
          author = {Neidhardt, Maximilian and Bengs, Marcel and Latus, Sarah and Gerlach, Stefan and Cyron, Christian J. and Sprenger, Johanna and Schlaefer, Alexander},
          title = {Ultrasound Shear Wave Elasticity Imaging with Spatio-Temporal Deep Learning},
          journal = {IEEE Transactions on Biomedical Engineering},
          year = {2022},
          pages = {1-1},
          url = {https://arxiv.org/abs/2204.05745},
          doi = {10.1109/TBME.2022.3168566}
        }
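
        For readers unfamiliar with spatio-temporal networks, the sketch below shows a tiny 3D CNN that regresses a scalar elasticity value from a small space-time window. Layer sizes and the window shape are assumptions, not the architecture from the paper:

          import torch
          import torch.nn as nn

          class TinySpatioTemporalCNN(nn.Module):
              def __init__(self):
                  super().__init__()
                  self.features = nn.Sequential(
                      nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
                      nn.MaxPool3d(2),
                      nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool3d(1),
                  )
                  self.head = nn.Linear(16, 1)   # elasticity estimate in kPa

              def forward(self, x):              # x: (batch, 1, time, depth, width)
                  return self.head(self.features(x).flatten(1))

          x = torch.randn(2, 1, 16, 32, 32)        # two random space-time windows
          print(TinySpatioTemporalCNN()(x).shape)  # torch.Size([2, 1])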
        
      • M. Neidhardt, S. Gerlach, R. Mieling, M.-H. Laves, T. Weiß, M. Gromniak, A. Fitzek, D. Möbius, I. Kniep, A. Ron, J. Schädler, A. Heinemann, K. Püschel, B. Ondruschka, A. Schlaefer (2022), "Robotic Tissue Sampling for Safe Post-mortem Biopsy in Infectious Corpses", IEEE Transactions on Medical Robotics and Bionics. ,1-1. [Abstract] [BibTeX] [DOI] [URL]
      • Abstract: In pathology and legal medicine, the histopathological and microbiological analysis of tissue samples from infected deceased is a valuable information for developing treatment strategies during a pandemic such as COVID-19. However, a conventional autopsy carries the risk of disease transmission and may be rejected by relatives. We propose minimally invasive biopsy with robot assistance under CT guidance to minimize the risk of disease transmission during tissue sampling and to improve accuracy. A flexible robotic system for biopsy sampling is presented, which is applied to human corpses placed inside protective body bags. An automatic planning and decision system estimates optimal insertion point. Heat maps projected onto the segmented skin visualize the distance and angle of insertions and estimate the minimum cost of a puncture while avoiding bone collisions. Further, we test multiple insertion paths concerning feasibility and collisions. A custom end effector is designed for inserting needles and extracting tissue samples under robotic guidance. Our robotic post-mortem biopsy (RPMB) system is evaluated in a study during the COVID-19 pandemic on 20 corpses and 10 tissue targets, 5 of them being infected with SARS-CoV-2. The mean planning time including robot path planning is 5.72±1.67 s. Mean needle placement accuracy is 7.19±4.22 mm.
        BibTeX:
        @article{Neidhardt2022,
          author = {Neidhardt, Maximilian and Gerlach, Stefan and Mieling, Robin and Laves, Max-Heinrich and Weiß, Thorben and Gromniak, Martin and Fitzek, Antonia and Möbius, Dustin and Kniep, Inga and Ron, Alexandra and Schädler, Julia and Heinemann, Axel and Püschel, Klaus and Ondruschka, Benjamin and Schlaefer, Alexander},
          title = {Robotic Tissue Sampling for Safe Post-mortem Biopsy in Infectious Corpses},
          journal = {IEEE Transactions on Medical Robotics and Bionics},
          year = {2022},
          pages = {1-1},
          url = {https://arxiv.org/abs/2201.12168},
          doi = {10.1109/TMRB.2022.3146440}
        }
        
      • T. Sonntag, M. Bauer, J. Sprenger, S. Gerlach, P. Breitfeld, A. Schlaefer (2022), "Deep learning based segmentation of cervical blood vessels in ultrasound images", Abstracts EUROANAESTHESIA 2022., In European Journal of Anaesthesiology. ,41. [Abstract] [BibTeX]
      • Abstract: Puncture of central vessels is a frequently used therapeutic and diagnostic procedure. The use of ultrasound (US) during needle insertion has become the gold standard. Handling the US probe and needle is challenging, especially in difficult anatomic conditions. Our long-term vision is a deep learning based and augmented reality (AR) assisted needle puncture. We aim to visualize the vessel structures in 3D based on 2D US image segmentation. While puncturing, the relative needle tip position and relevant vessels can be highlighted via AR lenses to optimize the image guidance process.
        BibTeX:
        @inproceedings{Sonntag2022,
          author = {T. Sonntag and M. Bauer and J. Sprenger and S. Gerlach and P. Breitfeld and A. Schlaefer},
          title = {Deep learning based segmentation of cervical blood vessels in ultrasound images},
          booktitle = {European Journal of Anaesthesiology},
          journal = {Abstracts EUROANAESTHESIA 2022},
          year = {2022},
          pages = {41}
        }
        
      • J. Sprenger, M. Bengs, S. Gerlach, M. Neidhardt, A. Schlaefer (2022), "Systematic analysis of volumetric ultrasound parameters for markerless 4D motion tracking", International Journal of Computer Assisted Radiology and Surgery. [Abstract] [BibTeX] [DOI] [URL]
      • Abstract: Motion compensation is an interesting approach to improve treatments of moving structures. For example, target motion can substantially affect dose delivery in radiation therapy, where methods to detect and mitigate the motion are widely used. Recent advances in fast, volumetric ultrasound have rekindled the interest in ultrasound for motion tracking. We present a setup to evaluate ultrasound based motion tracking and we study the effect of imaging rate and motion artifacts on its performance.
        BibTeX:
        @article{Sprenger2022,
          author = {Sprenger, Johanna and Bengs, Marcel and Gerlach, Stefan and Neidhardt, Maximilian and Schlaefer, Alexander},
          title = {Systematic analysis of volumetric ultrasound parameters for markerless 4D motion tracking},
          journal = {International Journal of Computer Assisted Radiology and Surgery},
          year = {2022},
          url = {https://doi.org/10.1007/s11548-022-02665-5},
          doi = {10.1007/s11548-022-02665-5}
        }
        
      • J. Sprenger, M. Neidhardt, S. Latus, S. Grube, M. Fischer, A. Schlaefer (2022), "Surface Scanning for Navigation Using High-Speed Optical Coherence Tomography", Current Directions in Biomedical Engineering. Vol. 8(1),62-65. [Abstract] [BibTeX] [DOI] [URL]
      • Abstract: Medical interventions are often guided by optical tracking systems and optical coherence tomography has shown promising results for markerless tracking of soft tissue. The high spatial resolution and subsurface information contain valuable information about the underlying tissue structure and tracking of certain target structures is in principle possible. However, the small field-of-view complicates the selection of suitable regions-of-interest for tracking. Therefore, we extend an experimental setup and perform volumetric surface scanning of target structures to enlarge the field-of-view. We show that the setup allows for data acquisition and that precise merging of the volumes is possible with mean absolute errors from 0.041 mm to 0.097 mm.
        BibTeX:
        @article{Sprenger2022a,
          author = {Johanna Sprenger and Maximilian Neidhardt and Sarah Latus and Sarah Grube and Martin Fischer and Alexander Schlaefer},
          title = {Surface Scanning for Navigation Using High-Speed Optical Coherence Tomography},
          journal = {Current Directions in Biomedical Engineering},
          year = {2022},
          volume = {8},
          number = {1},
          pages = {62--65},
          url = {https://doi.org/10.1515/cdbme-2022-0016},
          doi = {10.1515/cdbme-2022-0016}
        }
        

        2021

        • K. P. Abdolazizi, K. Linka, J. Sprenger, M. Neidhardt, A. Schlaefer, C. J. Cyron (2021), "Concentration-Specific Constitutive Modeling of Gelatin Based on Artificial Neural Networks", PAMM. Vol. 20(1),e202000284. [Abstract] [BibTeX] [DOI] [URL]
        • Abstract: Gelatin phantoms are frequently used in the development of surgical devices and medical imaging techniques. They exhibit mechanical properties similar to soft biological tissues [1] but can be handled at a much lower cost. Moreover, they enable a better reproducibility of experiments. Accurate constitutive models for gelatin are therefore of great interest for biomedical engineering. In particular it is important to capture the dependence of mechanical properties of gelatin on its concentration. Herein we propose a simple machine learning approach to this end. It uses artificial neural networks (ANN) for learning from indentation data the relation between the concentration of ballistic gelatin and the resulting mechanical properties.
          BibTeX:
          @article{Abdolazizi2021,
            author = {K. P. Abdolazizi and K. Linka and J. Sprenger and M. Neidhardt and A. Schlaefer and C. J. Cyron},
            title = {Concentration-Specific Constitutive Modeling of Gelatin Based on Artificial Neural Networks},
            journal = {PAMM},
            year = {2021},
            volume = {20},
            number = {1},
            pages = {e202000284},
            url = {https://onlinelibrary.wiley.com/doi/abs/10.1002/pamm.202000284},
            doi = {10.1002/pamm.202000284}
          }
          
        • K. P. Abdolazizi, K. Linka, J. Sprenger, M. Neidhardt, A. Schlaefer, C. J. Cyron (2021), "Identification of the concentration‐dependent viscoelastic constitutive parameters of gelatin by combining computational mechanics and machine learning", Proceedings in applied mathematics and mechanics. Vol. 21(1),e202100250. [Abstract] [BibTeX] [URL]
        • Abstract: Since the mechanical properties of gelatin are similar to those of soft biological tissues, gelatin is a commonly used surrogate for real tissues, for example in safety engineering or medical engineering. Additional advantages of gelatin over real tissues are lower costs and better reproducibility of experiments. Therefore, constitutive models of gelatin are of great interest. In particular, it is important to capture the concentration dependence of the mechanical properties since the gelatin mass concentration significantly affects the constitutive behavior. To this end, we propose a hybrid approach linking artificial neural networks (ANN) and classical constitutive modeling to relate the gelatin's concentration to its viscoelastic material properties using indentation data.
          BibTeX:
          @article{Abdolazizi2021a,
            author = {Kian Philipp Abdolazizi and Kevin Linka and Johanna Sprenger and Maximilian Neidhardt and Alexander Schlaefer and Christian J. Cyron},
            title = {Identification of the concentration‐dependent viscoelastic constitutive parameters of gelatin by combining computational mechanics and machine learning},
            journal = {Proceedings in applied mathematics and mechanics},
            year = {2021},
            volume = {21},
            number = {1},
            pages = {e202100250},
            url = {http://hdl.handle.net/11420/11562}
          }
          
        • L. Bargsten, D. Klisch, K. A. Riedl, T. Wissel, F. J. Brunner, K. Schaefers, M. Grass, S. Blankenberg, M. Seiffert, A. Schlaefer (2021), "Deep learning for guidewire detection in intravascular ultrasound images:", Current Directions in Biomedical Engineering. Vol. 7(1),106-110. [Abstract] [BibTeX] [DOI] [URL]
        • Abstract: Algorithms for automated analysis of intravascular ultrasound (IVUS) images can be disturbed by guidewires, which are often encountered when treating bifurcations in percutaneous coronary interventions. Detecting guidewires in advance can therefore help avoiding potential errors. This task is not trivial, since guidewires appear rather small compared to other relevant objects in IVUS images. We employed CNNs with additional multi-task learning as well as different guidewire-specific regularizations to enable and improve guidewire detection. In this context, we developed a network block which generates heatmaps that highlight guidewires without the need of localization annotations. The guidewire detection results reach values of 0.931 in terms of the F1-score and 0.996 in terms of area under curve (AUC). Comparing thresholded guidewire heatmaps with ground truth segmentation masks leads to a Dice score of 23.1 % and an average Hausdorff distance of 1.45 mm. Guidewire detection has proven to be a task that CNNs can handle quite well. Employing multi-task learning and guidewire-specific regularizations further improve detection results and enable generation of heatmaps that indicate the position of guidewires without actual labels
          BibTeX:
          @article{Bargsten2021c,
            author = {L. Bargsten and D. Klisch and K. A. Riedl and T. Wissel and F. J. Brunner and K. Schaefers and M. Grass and S. Blankenberg and M. Seiffert and A. Schlaefer},
            title = {Deep learning for guidewire detection in intravascular ultrasound images:},
            journal = {Current Directions in Biomedical Engineering},
            year = {2021},
            volume = {7},
            number = {1},
            pages = {106-110},
            url = {https://doi.org/10.1515/cdbme-2021-1023},
            doi = {10.1515/cdbme-2021-1023}
          }
          
        • L. Bargsten, S. Raschka, A. Schlaefer (2021), "Capsule networks for segmentation of small intravascular ultrasound image datasets", International Journal of Computer Assisted Radiology and Surgery. Vol. 16(8),1243-1254. [Abstract] [BibTeX] [DOI] [URL]
        • Abstract: Intravascular ultrasound (IVUS) imaging is crucial for planning and performing percutaneous coronary interventions. Automatic segmentation of lumen and vessel wall in IVUS images can thus help streamlining the clinical workflow. State-of-the-art results in image segmentation are achieved with data-driven methods like convolutional neural networks (CNNs). These need large amounts of training data to perform sufficiently well but medical image datasets are often rather small. A possibility to overcome this problem is exploiting alternative network architectures like capsule networks.
          BibTeX:
          @article{Bargsten2021,
            author = {L. Bargsten and S. Raschka and A. Schlaefer},
            title = {Capsule networks for segmentation of small intravascular ultrasound image datasets},
            journal = {International Journal of Computer Assisted Radiology and Surgery},
            year = {2021},
            volume = {16},
            number = {8},
            pages = {1243-1254},
            url = {https://doi.org/10.1007/s11548-021-02417-x},
            doi = {10.1007/s11548-021-02417-x}
          }
          
        • L. Bargsten, K. A. Riedl, T. Wissel, F. J. Brunner, K. Schaefers, M. Grass, S. Blankenberg, M. Seiffert, A. Schlaefer (2021), "Attention via Scattering Transforms for Segmentation of Small Intravascular Ultrasound Data Sets", In Proceedings of the Fourth Conference on Medical Imaging with Deep Learning., 07--09 Jul, 2021. Vol. 143,34-47. PMLR. [Abstract] [BibTeX] [URL]
        • Abstract: Using intracoronary imaging modalities like intravascular ultrasound (IVUS) has a positive impact on the results of percutaneous coronary interventions. Efficient extraction of important vessel metrics like lumen diameter, vessel wall thickness or plaque burden via automatic segmentation of IVUS images can improve the clinical workflow. State-of-the-art segmentation results are usually achieved by data-driven methods like convolutional neural networks (CNNs). However, clinical data sets are often rather small leading to extraction of image features which are not very meaningful and thus decreasing performance. This is also the case for some applications which inherently allow for only small amounts of available data, e.g., detection of diseases with extremely small prevalence or online-adaptation of an existing algorithm to individual patients. In this work we investigate how integrating scattering transformations - as special forms of wavelet transformations - into CNNs could improve the extraction of meaningful features. To this end, we developed a novel network module which uses features of a scattering transform for an attention mechanism. We observed that this approach improves the results of calcium segmentation up to 8.2% (relatively) in terms of the Dice coefficient and 24.8% in terms of the modified Hausdorff distance. In the case of lumen and vessel wall segmentation, the improvements are up to 2.3% (relatively) in terms of the Dice coefficient and 30.8% in terms of the modified Hausdorff distance. Incorporating scattering transformations as a component of an attention block into CNNs improves the segmentation results on small IVUS segmentation data sets. In general, scattering transformations can help in situations where efficient feature extractors can not be learned via the training data. This makes our attention module an interesting candidate for applications like few-shot learning for patient adaptation or detection of rare diseases.
          BibTeX:
          @inproceedings{Bargsten2021d,
            author = {L. Bargsten and K. A. Riedl and T. Wissel and F. J. Brunner and K. Schaefers and M. Grass and S. Blankenberg and M. Seiffert and A. Schlaefer},
            editor = {Heinrich, Mattias and Dou, Qi and de Bruijne, Marleen and Lellmann, Jan and Schläfer, Alexander and Ernst, Floris},
            title = {Attention via Scattering Transforms for Segmentation of Small Intravascular Ultrasound Data Sets},
            booktitle = {Proceedings of the Fourth Conference on Medical Imaging with Deep Learning},
            publisher = {PMLR},
            year = {2021},
            volume = {143},
            pages = {34--47},
            url = {https://proceedings.mlr.press/v143/bargsten21a.html}
          }
          
        • L. Bargsten, K. A. Riedl, T. Wissel, F. J. Brunner, K. Schaefers, M. Grass, S. Blankenberg, M. Seiffert, A. Schlaefer (2021), "Deep learning for calcium segmentation in intravascular ultrasound images:", Current Directions in Biomedical Engineering. Vol. 7(1),96-100. [Abstract] [BibTeX] [DOI] [URL]
        • Abstract: Knowing the shape of vascular calcifications is crucial for appropriate planning and conductance of percutaneous coronary interventions. The clinical workflow can therefore benefit from automatic segmentation of calcified plaques in intravascular ultrasound (IVUS) images. To solve segmentation problems with convolutional neural networks (CNNs), large datasets are usually required. However, datasets are often rather small in the medical domain. Hence, developing and investigating methods for increasing CNN performance on small datasets can help on the way towards clinically relevant results. We compared two state-of-the-art CNN architectures for segmentation, U-Net and DeepLabV3, and investigated how incorporating auxiliary image data with vessel wall and lumen annotations improves the calcium segmentation performance by using these either for pre-training or multi-task training. DeepLabV3 outperforms U-Net with up to 6.3 % by means of the Dice coefficient and 36.5 % by means of the average Hausdorff distance. Using auxiliary data improves the segmentation performance in both cases, whereas the multi-task approach outperforms the pre-training approach. The improvements of the multi-task approach in contrast to not using auxiliary-data at all is 5.7 % for the Dice coefficient and 42.9 % for the average Hausdorff distance. Automatic segmentation of calcified plaques in IVUS images is a demanding task due to their relatively small size compared to the image dimensions and due to visual ambiguities with other image structures. We showed that this problem can generally be tackled by CNNs. Furthermore, we were able to improve the performance by a multi-task learning approach with auxiliary segmentation data
          BibTeX:
          @article{Bargsten2021b,
            author = {L. Bargsten and K. A. Riedl and T. Wissel and F. J. Brunner and K. Schaefers and M. Grass and S. Blankenberg and M. Seiffert and A. Schlaefer},
            title = {Deep learning for calcium segmentation in intravascular ultrasound images:},
            journal = {Current Directions in Biomedical Engineering},
            year = {2021},
            volume = {7},
            number = {1},
            pages = {96-100},
            url = {https://doi.org/10.1515/cdbme-2021-1021},
            doi = {10.1515/cdbme-2021-1021}
          }
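
          The Dice coefficient reported above is a standard overlap metric; the snippet below shows its generic definition for binary masks (this is not the authors' evaluation code):

            import numpy as np

            def dice(pred, target, eps=1e-7):
                """Dice overlap of two binary masks; eps avoids division by zero."""
                pred, target = pred.astype(bool), target.astype(bool)
                intersection = np.logical_and(pred, target).sum()
                return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

            a = np.zeros((8, 8), dtype=int); a[2:6, 2:6] = 1   # 4x4 square
            b = np.zeros((8, 8), dtype=int); b[3:7, 3:7] = 1   # shifted 4x4 square
            print(dice(a, b))   # 2*9 / (16+16) = 0.5625 (up to eps)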
          
        • L. Bargsten, K. A. Riedl, T. Wissel, F. J. Brunner, K. Schaefers, J. Sprenger, M. Grass, M. Seiffert, S. Blankenberg, A. Schlaefer (2021), "Tailored methods for segmentation of intravascular ultrasound images via convolutional neural networks", In Medical Imaging 2021: Ultrasonic Imaging and Tomography. Vol. 11602,1-7. SPIE. [Abstract] [BibTeX] [DOI] [URL]
        • Abstract: Automatic delineation of relevant structures in intravascular imaging can support percutaneous coronary interventions (PCIs), especially when dealing with rather demanding cases. We found three major error types which occur regularly when segmenting lumen and wall of morphologically complex vessels with convolutional neural networks (CNNs). In order to reduce these three error types, we developed three IVUS-specific methods which are able to improve generalizability of state-of-the-art CNNs for IVUS segmentation tasks. These methods are based on three concepts: speckle statistics, artery shape priors via independent component analysis (ICA) and the concentricity condition of lumen and vessel wall. We found that all three methods outperform the baseline. Since all three concepts can be readily transferred to intravascular optical coherence tomography (IVOCT), we expect these findings can support the segmentation of corresponding images as well
          BibTeX:
          @inproceedings{Bargsten2021a,
            author = {L. Bargsten and K. A. Riedl and T. Wissel and F. J. Brunner and K. Schaefers and J. Sprenger and M. Grass and M. Seiffert and S. Blankenberg and A. Schlaefer},
            editor = {Brett C. Byram and Nicole V. Ruiter},
            title = {Tailored methods for segmentation of intravascular ultrasound images via convolutional neural networks},
            booktitle = {Medical Imaging 2021: Ultrasonic Imaging and Tomography},
            publisher = {SPIE},
            year = {2021},
            volume = {11602},
            pages = {1-7},
            url = {https://doi.org/10.1117/12.2580720},
            doi = {10.1117/12.2580720}
          }
          
        • M. Bengs, F. Behrendt, J. Krüger, R. Opfer, A. Schlaefer (2021), "Three-dimensional deep learning with spatial erasing for unsupervised anomaly segmentation in brain MRI.", International Journal of Computer Assisted Radiology and Surgery. Vol. 16(9),1413-1423. [Abstract] [BibTeX] [DOI] [URL]
        • Abstract: Brain Magnetic Resonance Images (MRIs) are essential for the diagnosis of neurological diseases. Recently, deep learning methods for unsupervised anomaly detection (UAD) have been proposed for the analysis of brain MRI. These methods rely on healthy brain MRIs and eliminate the requirement of pixel-wise annotated data compared to supervised deep learning. While a wide range of methods for UAD have been proposed, these methods are mostly 2D and only learn from MRI slices, disregarding that brain lesions are inherently 3D and the spatial context of MRI volumes remains unexploited.
          BibTeX:
          @article{Bengs2021b,
            author = {M. Bengs and F. Behrendt and J. Krüger and R. Opfer and A. Schlaefer},
            title = {Three-dimensional deep learning with spatial erasing for unsupervised anomaly segmentation in brain MRI.},
            journal = {International Journal of Computer Assisted Radiology and Surgery},
            year = {2021},
            volume = {16},
            number = {9},
            pages = {1413-1423},
            url = {https://doi.org/10.1007/s11548-021-02451-9},
            doi = {10.1007/s11548-021-02451-9}
          }
          
        • M. Bengs, M. Bockmayr, U. Schüller, A. Schlaefer (2021), "Medulloblastoma tumor classification using deep transfer learning with multi-scale EfficientNets.", In Medical Imaging 2021: Digital Pathology. Vol. 11603,70-75. SPIE. [Abstract] [BibTeX] [DOI] [URL]
        • Abstract: Medulloblastoma (MB) is the most common malignant brain tumor in childhood. The diagnosis is generally based on the microscopic evaluation of histopathological tissue slides. However, visual-only assessment of histopathological patterns is a tedious and time-consuming task and is also affected by observer variability. Hence, automated MB tumor classification could assist pathologists by promoting consistency and robust quantification. Recently, convolutional neural networks (CNNs) have been proposed for this task, while transfer learning has shown promising results. In this work, we propose an end-to-end MB tumor classification and explore transfer learning with various input sizes and matching network dimensions. We focus on differentiating between the histological subtypes classic and desmoplastic/nodular. For this purpose, we systematically evaluate recently proposed EfficientNets, which uniformly scale all dimensions of a CNN. Using a data set with 161 cases, we demonstrate that pre-trained EfficientNets with larger input resolutions lead to significant performance improvements compared to commonly used pre-trained CNN architectures. Also, we highlight the importance of transfer learning, when using such large architectures. Overall, our best performing method achieves an F1-Score of 80.1%
          BibTeX:
          @inproceedings{Bengs2021c,
            author = {M. Bengs and M. Bockmayr and U. Schüller and A. Schlaefer},
            editor = {John E. Tomaszewski and Aaron D. Ward},
            title = {Medulloblastoma tumor classification using deep transfer learning with multi-scale EfficientNets.},
            booktitle = {Medical Imaging 2021: Digital Pathology},
            publisher = {SPIE},
            year = {2021},
            volume = {11603},
            pages = {70-75},
            url = {https://doi.org/10.1117/12.2580717},
            doi = {10.1117/12.2580717}
          }
          
        • M. Bengs, S. Pant, M. Bockmayr, U. Schüller, A. Schlaefer (2021), "Multi-Scale Input Strategies for Medulloblastoma Tumor Classification using Deep Transfer Learning:", Current Directions in Biomedical Engineering. Vol. 7(1),63-66. [BibTeX] [DOI] [URL]
        • BibTeX:
          @article{Bengs2021,
            author = {M. Bengs and S. Pant and M. Bockmayr and U. Schüller and A. Schlaefer},
            title = {Multi-Scale Input Strategies for Medulloblastoma Tumor Classification using Deep Transfer Learning:},
            journal = {Current Directions in Biomedical Engineering},
            year = {2021},
            volume = {7},
            number = {1},
            pages = {63--66},
            url = {https://doi.org/10.1515/cdbme-2021-1014},
            doi = {10.1515/cdbme-2021-1014}
          }
          
        • D. Bhattacharya, C. Betz, D. Eggert, A. Schlaefer (2021), "Dual Parallel Reverse Attention Edge Network: DPRA-EdgeNet.", Nordic Machine Intelligence, MedAI2021. Vol. 1(1),11-13. Second place in challenge task. [Abstract] [BibTeX] [DOI]
        • Abstract: In this paper, we propose the Dual Parallel Reverse Attention Edge Network (DPRA-EdgeNet), an architecture that jointly learns to segment an object and its edges. We compare our model against three popular segmentation models and demonstrate that our model improves the segmentation accuracy on the Kvasir-SEG dataset and the Kvasir-Instrument dataset.
          BibTeX:
          @article{Bhattacharya2021,
            author = {D. Bhattacharya and C. Betz and D. Eggert and A. Schlaefer},
            title = {Dual Parallel Reverse Attention Edge Network: DPRA-EdgeNet.},
            journal = {Nordic Machine Intelligence, MedAI2021},
            year = {2021},
            volume = {1},
            number = {1},
            pages = {11-13},
            note = {Second place in challenge task},
            doi = {10.5617/nmi.9116}
          }
          
        • D. Bhattacharya, C. Betz, D. Eggert, A. Schlaefer (2021), "Self-Supervised U-Net for Segmenting Flat and Sessile Polyps.", In SPIE Medical Imaging Symposium 2021. [Abstract] [BibTeX] [URL]
        • Abstract: Colorectal Cancer (CRC) poses a great risk to public health. It is the third most common cause of cancer in the US. Development of colorectal polyps is one of the earliest signs of cancer. Early detection and resection of polyps can greatly increase survival rate to 90%. Manual inspection can cause misdetections because polyps vary in color, shape, size and appearance. To this end, Computer-Aided Diagnosis systems (CADx) have been proposed that detect polyps by processing the colonoscopic videos. The system acts as a secondary check to help clinicians reduce misdetections so that polyps may be resected before they transform to cancer. Polyps vary in color, shape, size, texture and appearance. As a result, the miss rate of polyps is between 6% and 27% despite the prominence of CADx solutions. Furthermore, sessile and flat polyps which have diameter less than 10 mm are more likely to be undetected. Convolutional Neural Networks (CNN) have shown promising results in polyp segmentation. However, all of these works have a supervised approach and are limited by the size of the dataset. It was observed that smaller datasets reduce the segmentation accuracy of ResUNet++. We train a U-Net to inpaint randomly dropped out pixels in the image as a proxy task. The dataset we use for pre-training is Kvasir-SEG dataset. This is followed by a supervised training on the limited Kvasir-Sessile dataset. Our experimental results demonstrate that with limited annotated dataset and a larger unlabeled dataset, self-supervised approach is a better alternative than fully supervised approach. Specifically, our self-supervised U-Net performs better than five segmentation models which were trained in supervised manner on the Kvasir-Sessile dataset.
          BibTeX:
          @conference{Bhattacharya2021a,
            author = {D. Bhattacharya and C. Betz and D. Eggert and A. Schlaefer},
            title = {Self-Supervised U-Net for Segmenting Flat and Sessile Polyps.},
            booktitle = {SPIE Medical Imaging Symposium 2021},
            year = {2021},
            url = {https://arxiv.org/abs/2110.08776#}
          }
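
          The self-supervised proxy task described above corrupts images by randomly dropping pixels and trains the network to restore them. The snippet below sketches only the corruption step, with assumed parameter names, not the authors' code:

            import numpy as np

            def random_pixel_dropout(image, drop_fraction=0.25, rng=None):
                """Return (corrupted_image, mask); mask is True where pixels were dropped."""
                rng = np.random.default_rng() if rng is None else rng
                mask = rng.random(image.shape[:2]) < drop_fraction
                corrupted = image.copy()
                corrupted[mask] = 0
                return corrupted, mask

            img = np.random.default_rng(1).random((64, 64))
            corrupted, mask = random_pixel_dropout(img, 0.25, np.random.default_rng(2))
            print(mask.mean())   # roughly a quarter of the pixels are zeroed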
          
        • D. B. Ellebrecht, N. Hessler, A. Schlaefer, N. Gessert (2021), "Confocal Laser Microscopy for in vivo Intraoperative Application: Diagnostic Accuracy of Investigator and Machine Learning Strategies." Vol. 37(6),533-541. [Abstract] [BibTeX] [DOI] [URL]
        • Abstract: Background: Confocal laser microscopy (CLM) is one of the optical techniques that are promising methods of intraoperative in vivo real-time tissue examination based on tissue fluorescence. However, surgeons might struggle interpreting CLM images intraoperatively due to different tissue characteristics of different tissue pathologies in clinical reality. Deep learning techniques enable fast and consistent image analysis and might support intraoperative image interpretation. The objective of this study was to analyze the diagnostic accuracy of newly trained observers in the evaluation of normal colon and peritoneal tissue and colon cancer and metastasis, respectively, and to compare it with that of convolutional neural networks (CNNs). Methods: Two hundred representative CLM images of the normal and malignant colon and peritoneal tissue were evaluated by newly trained observers (surgeons and pathologists) and CNNs (VGG-16 and Densenet121), respectively, based on tissue dignity. The primary endpoint was the correct detection of the normal and cancer/metastasis tissue measured by sensitivity and specificity of both groups. Additionally, positive predictive values (PPVs) and negative predictive values (NPVs) were calculated for the newly trained observer group. The interobserver variability of dignity evaluation was calculated using kappa statistic. The F1-score and area under the curve (AUC) were used to evaluate the performance of image recognition of the CNNs’ training scenarios. Results: Sensitivity and specificity ranged between 0.55 and 1.0 (pathologists: 0.66-0.97; surgeons: 0.55-1.0) and between 0.65 and 0.96 (pathologists: 0.68-0.93; surgeons: 0.65-0.96), respectively. PPVs were 0.75 and 0.90 in the pathologists’ group and 0.73-0.96 in the surgeons’ group, respectively. NPVs were 0.73 and 0.96 for pathologists’ and between 0.66 and 1.00 for surgeons’ tissue analysis. The overall interobserver variability was 0.54. Depending on the training scenario, cancer/metastasis tissue was classified with an AUC of 0.77-0.88 by VGG-16 and 0.85-0.89 by Densenet121. Transfer learning improved performance over training from scratch. Conclusions: Newly trained investigators are able to learn CLM images features and interpretation rapidly, regardless of their clinical experience. Heterogeneity in tissue diagnosis and a moderate interobserver variability reflect the clinical reality more realistic. CNNs provide comparable diagnostic results as clinical observers and could improve surgeons’ intraoperative tissue assessment.
          BibTeX:
          @misc{Ellebrecht2021,
            author = {D. B. Ellebrecht and N. Hessler and A. Schlaefer and N. Gessert},
            title = {Confocal Laser Microscopy for in vivo Intraoperative Application: Diagnostic Accuracy of Investigator and Machine Learning Strategies.},
            year = {2021},
            volume = {37},
            number = {6},
            pages = {533-541},
            url = {https://www.karger.com/DOI/10.1159/000517146},
            doi = {10.1159/000517146}
          }
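
          The observer statistics reported above follow the standard confusion-matrix definitions; the snippet below restates them generically (not the study's statistics code):

            def observer_metrics(tp, fp, tn, fn):
                """Sensitivity, specificity, PPV and NPV from a binary confusion matrix."""
                return {
                    "sensitivity": tp / (tp + fn),
                    "specificity": tn / (tn + fp),
                    "ppv": tp / (tp + fp),
                    "npv": tn / (tn + fn),
                }

            # Toy example: 90 true positives, 10 false negatives, 80 true negatives
            # and 20 false positives out of 200 evaluated images.
            print(observer_metrics(tp=90, fp=20, tn=80, fn=10))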
          
        • J. F. Fast, H. R. Dava, A. K. Rüppel, D. Kundrat, M. Krauth, M.-H. Laves, S. Spindeldreier, L. A. Kahrs, M. Ptok (2021), "Stereo Laryngoscopic Impact Site Prediction for Droplet-Based Stimulation of the Laryngeal Adductor Reflex", IEEE Access. Vol. 9,112177-112192. [Abstract] [BibTeX] [DOI]
        • Abstract: The laryngeal adductor reflex (LAR) is a vital reflex of the human larynx. LAR malfunctions may cause life-threatening aspiration events. An objective, noninvasive, and reproducible method for LAR assessment is still lacking. Stimulation of the larynx by droplet impact, termed Microdroplet Impulse Testing of the LAR (MIT-LAR), may remedy this situation. However, droplet instability and imprecise stimulus application thus far prevented MIT-LAR from gaining clinical relevance. We present a system comprising two alternative, custom-built stereo laryngoscopes, each offering a distinct set of properties, a droplet applicator module, and image/point cloud processing algorithms to enable a targeted, droplet-based LAR stimulation. Droplet impact site prediction (ISP) is achieved by droplet trajectory identification and spatial target reconstruction. The reconstruction and ISP accuracies were experimentally evaluated. Global spatial reconstruction errors at the glottal area of (0.3±0.3) mm and (0.4±0.3) mm and global ISP errors of (0.9±0.6) mm and (1.3±0.8) mm were found for a rod lens-based and an alternative, fiberoptic laryngoscope, respectively. In the case of the rod lens-based system, 96% of all observed ISP error values are inferior to 2 mm; a value of 80% was found with the fiberoptic assembly. This contribution represents an important step towards introducing a reproducible and objective LAR screening method into the clinical routine.
          BibTeX:
          @article{Fast2021,
            author = {J. F. Fast and H. R. Dava and A. K. Rüppel and D. Kundrat and M. Krauth and M.-H. Laves and S. Spindeldreier and L. A. Kahrs and M. Ptok},
            title = {Stereo Laryngoscopic Impact Site Prediction for Droplet-Based Stimulation of the Laryngeal Adductor Reflex},
            journal = {IEEE Access},
            year = {2021},
            volume = {9},
            pages = {112177-112192},
            doi = {10.1109/ACCESS.2021.3103049}
          }
          
        • S. Gerlach, M. Neidhardt, T. Weiß, M.-H. Laves, C. Stapper, M. Gromniak, I. Kniep, D. Möbius, A. Heinemann, B. Ondruschka, A. Schlaefer (2021), "Needle insertion planning for obstacle avoidance in robotic biopsy.", Current Directions in Biomedical Engineering. Vol. 7(2),779-782. [Abstract] [BibTeX] [DOI] [URL]
        • Abstract: Understanding the underlying pathology in different tissues and organs is crucial when fighting pandemics like COVID-19. During conventional autopsy, large tissue sample sets of multiple organs can be collected from cadavers. However, direct contact with an infectious corpse is associated with the risk of disease transmission and relatives of the deceased might object to a conventional autopsy. To overcome these drawbacks, we consider minimally invasive autopsies with robotic needle placement as a practical alternative. One challenge in needle based biopsies is avoidance of dense obstacles, including bones or embedded medical devices such as pacemakers. We demonstrate an approach for automated planning and visualising suitable needle insertion points based on computed tomography (CT) scans. Needle paths are modeled by a line between insertion and target point and needle insertion path occlusion from obstacles is determined by using central projections from the biopsy target to the surface of the skin. We project the maximum and minimum CT attenuation, insertion depth, and standard deviation of CT attenuation along the needle path and create two-dimensional intensity-maps projected on the skin. A cost function considering these metrics is introduced and minimized to find an optimal biopsy needle path. Furthermore, we disregard insertion points without sufficient room for needle placement. For visualisation, we display the color-coded cost function so that suitable points for needle insertion become visible. We evaluate our system on 10 post mortem CTs with six biopsy targets in abdomen and thorax annotated by medical experts. For all patients and targets an optimal insertion path is found. The mean distance to the target ranges from (49.9 ± 12.9) mm for the spleen to (90.1 ± 25.8) mm for the pancreas.
          BibTeX:
          @article{Gerlach2021,
            author = {S. Gerlach and M. Neidhardt and T. Weiß and M.-H. Laves and C. Stapper and M. Gromniak and I. Kniep and D. Möbius and A. Heinemann and B. Ondruschka and A. Schlaefer},
            title = {Needle insertion planning for obstacle avoidance in robotic biopsy.},
            journal = {Current Directions in Biomedical Engineering},
            year = {2021},
            volume = {7},
            number = {2},
            pages = {779--782},
            url = {https://doi.org/10.1515/cdbme-2021-2199},
            doi = {10.1515/cdbme-2021-2199}
          }
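
          The planning step above scores candidate skin points by several projected metrics and picks the minimum-cost insertion point. The sketch below illustrates that idea with assumed metric names and weights, not the authors' cost function:

            import numpy as np

            def select_insertion_point(max_hu, depth_mm, std_hu, weights=(1.0, 0.2, 0.5), feasible=None):
                """All inputs are 2D maps over candidate skin points (same shape)."""
                w1, w2, w3 = weights
                cost = w1 * max_hu + w2 * depth_mm + w3 * std_hu
                if feasible is not None:
                    cost = np.where(feasible, cost, np.inf)   # discard blocked points
                idx = np.unravel_index(np.argmin(cost), cost.shape)
                return idx, cost[idx]

            # Toy example on a 4x4 grid of candidate skin points.
            rng = np.random.default_rng(0)
            max_hu = rng.uniform(-100, 1500, (4, 4))   # max CT attenuation along each path
            depth = rng.uniform(30, 120, (4, 4))       # insertion depth in mm
            std_hu = rng.uniform(0, 300, (4, 4))       # attenuation spread along each path
            feasible = rng.random((4, 4)) > 0.3        # e.g. enough room for the needle
            print(select_insertion_point(max_hu, depth, std_hu, feasible=feasible))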
          
        • J. Krüger, A. C. Ostwaldt, L. Spies, B. Geisler, A. Schlaefer, H. H. Kitzler, S. Schippling, R. Opfer (2021), "Infratentorial lesions in multiple sclerosis patients: intra- and inter-rater variability in comparison to a fully automated segmentation using 3D convolutional neural networks." [Abstract] [BibTeX] [DOI] [URL]
        • Abstract: Automated quantification of infratentorial multiple sclerosis lesions on magnetic resonance imaging is clinically relevant but challenging. To overcome some of these problems, we propose a fully automated lesion segmentation algorithm using 3D convolutional neural networks (CNNs).
          BibTeX:
          @article{Krueger2021,
            author = {J. Krüger and A. C. Ostwaldt and L. Spies and B. Geisler and A. Schlaefer and H. H. Kitzler and S. Schippling and R. Opfer},
            title = {Infratentorial lesions in multiple sclerosis patients: intra- and inter-rater variability in comparison to a fully automated segmentation using 3D convolutional neural networks.},
            year = {2021},
            url = {https://doi.org/10.1007/s00330-021-08329-3},
            doi = {10.1007/s00330-021-08329-3}
          }
          
        • S. Latus, J. Sprenger, M. Neidhardt, J. Schädler, A. Ron, A. Fitzek, M. Schlüter, P. Breitfeld, A. Heinemann, K. Püschel, A. Schlaefer (2021), "Rupture detection during needle insertion using complex OCT data and CNNs", IEEE Transactions on Biomedical Engineering. ,1-1. [Abstract] [BibTeX] [DOI]
        • Abstract: Objective: Soft tissue deformation and ruptures complicate needle placement. However, ruptures at tissue interfaces also contain information which helps physicians to navigate through different layers. This navigation task can be challenging, whenever ultrasound (US) image guidance is hard to align and externally sensed forces are superimposed by friction. Methods: We propose an experimental setup for reproducible needle insertions, applying optical coherence tomography (OCT) directly at the needle tip as well as external US and force measurements. Processing the complex OCT data is challenging as the penetration depth is limited and the data can be difficult to interpret. Using a machine learning approach, we show that ruptures can be detected in the complex OCT data without additional external guidance or measurements after training with multi-modal ground-truth from US and force. Results: We can detect ruptures with accuracies of 0.94 and 0.91 on homogeneous and inhomogeneous phantoms, respectively, and 0.71 for ex-situ tissues. Conclusion: We propose an experimental setup and deep learning based rupture detection for the complex OCT data in front of the needle tip, even in deeper tissue structures without the need for US or force sensor guiding. Significance: This study promises a suitable approach to complement a robust robotic needle placement.
          BibTeX:
          @article{Latus2021,
            author = {S. Latus and J. Sprenger and M. Neidhardt and J. Schädler and A. Ron and A. Fitzek and M. Schlüter and P. Breitfeld and A. Heinemann and K. Püschel and A. Schlaefer},
            title = {Rupture detection during needle insertion using complex OCT data and CNNs},
            journal = {IEEE Transactions on Biomedical Engineering},
            year = {2021},
            pages = {1-1},
            doi = {10.1109/TBME.2021.3063069}
          }
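
          Ground-truth rupture events in such setups can, for example, be derived from the external force signal; the sketch below flags sudden force drops and uses assumed thresholds, not the labeling scheme from the paper:

            import numpy as np

            def rupture_events(force, drop_threshold=0.2, window=5):
                """Indices where the force drops by more than drop_threshold (N)
                within `window` samples."""
                force = np.asarray(force, dtype=float)
                drops = force[:-window] - force[window:]
                return np.where(drops > drop_threshold)[0] + window

            # Toy force trace: slow build-up, a sharp drop around sample 60,
            # then renewed build-up.
            f = np.concatenate([np.linspace(0.0, 2.0, 60), np.linspace(1.2, 2.5, 40)])
            print(rupture_events(f))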
          
        • K. Linka, M. Hillgärtner, K. P. Abdolazizi, R. C. Aydin, M. Itskov, C. J. Cyron (2021), "Constitutive artificial neural networks: A fast and general approach to predictive data-driven constitutive modeling by deep learning.", Journal of Computational Physics. Vol. 429,110010. [Abstract] [BibTeX] [DOI] [URL]
        • Abstract: In this paper we introduce constitutive artificial neural networks (CANNs), a novel machine learning architecture for data-driven modeling of the mechanical constitutive behavior of materials. CANNs are able to incorporate by their very design information from three different sources, namely stress-strain data, theoretical knowledge from materials theory, and diverse additional information (e.g., about microstructure or materials processing). CANNs can easily and efficiently be implemented in standard computational software. They require only a low-to-moderate amount of training data and training time to learn without human guidance the constitutive behavior also of complex nonlinear and anisotropic materials. Moreover, in a simple academic example we demonstrate how the input of microstructural data can endow CANNs with the ability to describe not only the behavior of known materials but to predict also the properties of new materials where no stress-strain data are available yet. This ability may be particularly useful for the future in-silico design of new materials. The developed source code of the CANN architecture and accompanying example data sets are available at https://github.com/ConstitutiveANN/CANN.
          BibTeX:
          @article{Linka2021,
            author = {K. Linka and M. Hillgärtner and K. P. Abdolazizi and R. C. Aydin and M. Itskov and C. J. Cyron},
            title = {Constitutive artificial neural networks: A fast and general approach to predictive data-driven constitutive modeling by deep learning.},
            journal = {Journal of Computational Physics},
            year = {2021},
            volume = {429},
            pages = {110010},
            url = {https://www.sciencedirect.com/science/article/pii/S0021999120307841},
            doi = {10.1016/j.jcp.2020.110010}
          }
          
        • R. Mieling, J. Sprenger, S. Latus, L. Bargsten, A. Schlaefer (2021), "A novel optical needle probe for deep learning-based tissue elasticity characterization", Current Directions in Biomedical Engineering. Vol. 7(1),21-25. [BibTeX] [DOI] [URL]
        • BibTeX:
          @article{Mieling2021,
            author = {R. Mieling and J. Sprenger and S. Latus and L. Bargsten and A. Schlaefer},
            title = {A novel optical needle probe for deep learning-based tissue elasticity characterization},
            journal = {Current Directions in Biomedical Engineering},
            year = {2021},
            volume = {7},
            number = {1},
            pages = {21--25},
            url = {https://doi.org/10.1515/cdbme-2021-1005},
            doi = {10.1515/cdbme-2021-1005}
          }
          
        • M. Neidhardt, S. Gerlach, M.-H. Laves, S. Latus, C. Stapper, M. Gromniak, A. Schlaefer (2021), "Collaborative robot assisted smart needle placement.", Current Directions in Biomedical Engineering. Vol. 7(2),472-475. [Abstract] [BibTeX] [DOI] [URL]
        • Abstract: Needles are key tools to realize minimally invasive interventions. Physicians commonly rely on subjectively perceived insertion forces at the distal end of the needle when advancing the needle tip to the desired target. However, detecting tissue transitions at the distal end of the needle is difficult since the sensed forces are dominated by shaft forces. Disentangling insertion forces has the potential to substantially improve needle placement accuracy. We propose a collaborative system for robotic needle insertion, relaying haptic information sensed directly at the needle tip to the physician by haptic feedback through a lightweight robot. We integrate optical fibers into medical needles and use optical coherence tomography to image a moving surface at the tip of the needle. Using a convolutional neural network, we estimate forces acting on the needle tip from the optical coherence tomography data. We feed back forces estimated at the needle tip for real-time haptic feedback and robot control. When inserting the needle at constant velocity, the force change estimated at the tip when penetrating tissue layers is up to 94 % between deep tissue layers compared to the force change at the needle handle of 2.36 %. Collaborative needle insertion results in more sensible force change at tissue transitions with haptic feedback from the tip (49.79 ± 25.51) % compared to the conventional shaft feedback (15.17 ± 15.92) %. Tissue transitions are more prominent when utilizing forces estimated at the needle tip compared to the forces at the needle shaft, indicating that a more informed advancement of the needle is possible with our system.
          BibTeX:
          @article{Neidhardt2021a,
            author = {M. Neidhardt and S. Gerlach and M.-H. Laves and S. Latus and C. Stapper and M. Gromniak and A. Schlaefer},
            title = {Collaborative robot assisted smart needle placement.},
            journal = {Current Directions in Biomedical Engineering},
            year = {2021},
            volume = {7},
            number = {2},
            pages = {472--475},
            url = {https://doi.org/10.1515/cdbme-2021-2120},
            doi = {10.1515/cdbme-2021-2120}
          }
          
        • M. Neidhardt, J. Ohlsen, N. Hoffmann, A. Schlaefer (2021), "Parameter Identification for Ultrasound Shear Wave Elastography Simulation", Current Directions in Biomedical Engineering. Vol. 7(1),35-38. [BibTeX] [DOI] [URL]
        • BibTeX:
          @article{Neidhardt2021,
            author = {M. Neidhardt and J. Ohlsen and N. Hoffmann and A. Schlaefer},
            title = {Parameter Identification for Ultrasound Shear Wave Elastography Simulation},
            journal = {Current Directions in Biomedical Engineering},
            year = {2021},
            volume = {7},
            number = {1},
            pages = {35--38},
            url = {https://doi.org/10.1515/cdbme-2021-1008},
            doi = {10.1515/cdbme-2021-1008}
          }
          
        • J. Ohlsen, M. Neidhardt, A. Schlaefer, N. Hoffmann (2021), "Modelling shear wave propagation in soft tissue surrogates using a finite element- and finite difference method", PAMM. Vol. 20(1),e202000148. [Abstract] [BibTeX] [DOI] [URL]
        • Abstract: Shear Wave Elasticity Imaging (SWEI) has become a popular medical imaging technique [1] in which soft tissue is excited by the acoustic radiation forces of a focused ultrasonic beam. Tissue stiffness can then be derived from measurements of shear wave propagation speeds [2]. The main objective of this work is a comparison of a finite element (FEM) and a finite difference method (FDM) in terms of their computational efficiency when modeling shear wave propagation in tissue phantoms. Moreover, the propagation of shear waves is examined in experiments with ballistic gelatin to assess the simulation results. In comparison to the FEM the investigated FDM proves to be significantly more performant for this computing task.
          BibTeX:
          @article{Ohlsen2021,
            author = {J. Ohlsen and M. Neidhardt and A. Schlaefer and N. Hoffmann},
            title = {Modelling shear wave propagation in soft tissue surrogates using a finite element- and finite difference method},
            journal = {PAMM},
            year = {2021},
            volume = {20},
            number = {1},
            pages = {e202000148},
            url = {https://onlinelibrary.wiley.com/doi/abs/10.1002/pamm.202000148},
            doi = {10.1002/pamm.202000148}
          }
          
        • S. Lehmann, A. Rogalla, M. Neidhardt, A. Schlaefer, S. Schupp (2021), "Online Strategy Synthesis for Safe and Optimized Control of Steerable Needles.", Electronic Proceedings in Theoretical Computer Science., Oct, 2021. Vol. 348,128-135. Open Publishing Association. [Abstract] [BibTeX] [DOI] [URL]
        • Abstract: Autonomous systems are often applied in uncertain environments, which require prospective action planning and retrospective data evaluation for future planning to ensure safe operation. Formal approaches may support these systems with safety guarantees, but are usually expensive and do not scale well with growing system complexity. In this paper, we introduce online strategy synthesis based on classical strategy synthesis to derive formal safety guarantees while reacting and adapting to environment changes. To guarantee safety online, we split the environment into region types which determine the acceptance of action plans and trigger local correcting actions. Using model checking on a frequently updated model, we can then derive locally safe action plans (prospectively), and match the current model against new observations via reachability checks (retrospectively). As use case, we successfully apply online strategy synthesis to medical needle steering, i.e., navigating a (flexible and beveled) needle through tissue towards a target without damaging its surroundings.
          BibTeX:
          @article{Lehmann2021,
            author = {S. Lehmann and A. Rogalla and M. Neidhardt and A. Schlaefer and S. Schupp},
            title = {Online Strategy Synthesis for Safe and Optimized Control of Steerable Needles.},
            journal = {Electronic Proceedings in Theoretical Computer Science},
            publisher = {Open Publishing Association},
            year = {2021},
            volume = {348},
            pages = {128-135},
            url = {http://dx.doi.org/10.4204/EPTCS.348.9},
            doi = {10.4204/eptcs.348.9}
          }
          
        • M. Schlüter (2021), "Analysis of ultrasound and optical coherence tomography for markerless volumetric image guidance in robotic radiosurgery.". Thesis at: Technische Universität Hamburg. [Abstract] [BibTeX] [DOI] [URL]
        • Abstract: An accurate dose delivery in radiosurgery requires to reliably detect and compensate any motion of the target during the treatment. In this thesis, we study approaches for markerless volumetric image guidance. For abdominal targets, we analyze and optimize the impact of robotic transabdominal ultrasound imaging. For cranial targets, we describe a novel setup using optical coherence tomography.
          BibTeX:
          @phdthesis{Schlueter2021,
            author = {M. Schlüter},
            title = {Analysis of ultrasound and optical coherence tomography for markerless volumetric image guidance in robotic radiosurgery.},
            school = {Technische Universität Hamburg},
            year = {2021},
            url = {http://hdl.handle.net/11420/10432},
            doi = {10.15480/882.3798}
          }
          
        • F. N. Schmidt, S. Gerlach, M. Issleib, A. Schlaefer, B. Busse (2021), "Development of a virtual reality-based training for the elderly with increased fracture risk to prevent falls and improve their balance", Bone Reports. Vol. 14,100950. [BibTeX] [DOI] [URL]
        • BibTeX:
          @article{Schmidt2021,
            author = {F. N. Schmidt and S. Gerlach and M. Issleib and A. Schlaefer and B. Busse},
            title = {Development of a virtual reality-based training for the elderly with increased fracture risk to prevent falls and improve their balance},
            journal = {Bone Reports},
            year = {2021},
            volume = {14},
            pages = {100950},
            note = {Abstracts of the ECTS Congress 2021},
            url = {https://www.sciencedirect.com/science/article/pii/S2352187221002059},
            doi = {10.1016/j.bonr.2021.100950}
          }
          
        • J. Sprenger, M. Neidhardt, M. Schlüter, S. Latus, T. Gosau, J. Kemmling, S. Feldhaus, U. Schumacher, A. Schlaefer (2021), "In-vivo markerless motion detection from volumetric optical coherence tomography data using CNNs", In Medical Imaging 2021: Image-Guided Procedures, Robotic Interventions, and Modeling. Vol. 11598,345 - 350. SPIE. [Abstract] [BibTeX] [DOI] [URL]
        • Abstract: Precise navigation is an important task in robot-assisted and minimally invasive surgery. The need for optical markers and a lack of distinct anatomical features on skin or organs complicate tissue tracking with commercial tracking systems. Previous work has shown the feasibility of a 3D optical coherence tomography based system for this purpose. Furthermore, convolutional neural networks have been proven to precisely detect shifts between volumes. However, most experiments have been performed with phantoms or ex-vivo tissue. We introduce an experimental setup and perform measurements on perfused and non-perfused (dead) tissue of in-vivo xenograft tumors. We train 3D siamese deep learning models and evaluate the precision of the motion prediction. The network's ability to predict shifts for different motion magnitudes and also the performance for the different volume axes are compared. The root-mean-square errors are 0.12 mm and 0.08 mm on perfused and non-perfused tumor tissue, respectively.
          BibTeX:
          @inproceedings{Sprenger2021,
            author = {J. Sprenger and M. Neidhardt and M. Schlüter and S. Latus and T. Gosau and J. Kemmling and S. Feldhaus and U. Schumacher and A. Schlaefer},
            editor = {Cristian A. Linte and Jeffrey H. Siewerdsen},
            title = {In-vivo markerless motion detection from volumetric optical coherence tomography data using CNNs},
            booktitle = {Medical Imaging 2021: Image-Guided Procedures, Robotic Interventions, and Modeling},
            publisher = {SPIE},
            year = {2021},
            volume = {11598},
            pages = {345 -- 350},
            url = {https://doi.org/10.1117/12.2581023},
            doi = {10.1117/12.2581023}
          }
          
        • J. Sprenger, J. Petersen, N. Neumann, H. Reichenspurner, D. Russ, C. Detter, A. Schlaefer (2021), "Tracking heart surface features to determine myocardial contrast agent enrichment", Current Directions in Biomedical Engineering. Vol. 7(1),53-57. [BibTeX] [DOI] [URL]
        • BibTeX:
          @article{Sprenger2021b,
            author = {J. Sprenger and J. Petersen and N. Neumann and H. Reichenspurner and D. Russ and C. Detter and A. Schlaefer},
            title = {Tracking heart surface features to determine myocardial contrast agent enrichment},
            journal = {Current Directions in Biomedical Engineering},
            year = {2021},
            volume = {7},
            number = {1},
            pages = {53--57},
            url = {https://doi.org/10.1515/cdbme-2021-1012},
            doi = {10.1515/cdbme-2021-1012}
          }
          
        • J. Sprenger, T. Saathoff, A. Schlaefer (2021), "Automated robotic surface scanning with optical coherence tomography", In IEEE International Symposium on Biomedical Imaging. ,1137-1140. [Abstract] [BibTeX]
        • Abstract: Optical coherence tomography (OCT) is a near-infrared light based imaging modality that enables depth scans with a high spatial resolution. By scanning along the lateral dimensions, high-resolution volumes can be acquired. This allows to characterize tissue and precisely detect abnormal structures in medical scenarios. However, its small field of view (FOV) limits the applicability of OCT for medical examinations. We therefore present an automated setup to move an OCT scan head over arbitrary surfaces. By mounting the scan head to a highly accurate robot arm, we obtain precise information about the position of the acquired volumes. We implement a geometric approach to stitch the volumes and generate the surface scans. Our results show that a precise stitching of the volumes is achieved with mean absolute errors of 0.078 mm and 0.098 mm in the lateral directions and 0.037 mm in the axial direction. We can show that our setup provides automated surface scanning with OCT of samples and phantoms larger than the usual FOV.
          BibTeX:
          @conference{Sprenger2021a,
            author = {J. Sprenger and T. Saathoff and A. Schlaefer},
            title = {Automated robotic surface scanning with optical coherence tomography},
            booktitle = {IEEE International Symposium on Biomedical Imaging},
            year = {2021},
            pages = {1137-1140}
          }
          
        • M. Tölle, M.-H. Laves, A. Schlaefer (2021), "A mean-field variational inference approach to deep image prior for inverse problems in medical imaging" Vol. 143 N.N.. [Abstract] [BibTeX] [URL]
        • Abstract: Exploiting the deep image prior property of convolutional auto-encoder networks is especially interesting for medical image processing as it avoids hallucinations by omitting supervised learning. Its spectral bias towards lower frequencies makes it suitable for inverse image problems such as denoising and super-resolution, but manual early stopping has to be applied to act as a low-pass filter. In this paper, we present a novel Bayesian approach to deep image prior using mean-field variational inference. This allows for uncertainty quantification on a per-pixel level and, given the right prior distribution on the network weights, omits the need for early stopping. We optimize the parameters of the weight prior towards reconstruction accuracy using Bayesian optimization with Gaussian Process regression. We evaluate our approach on different inverse tasks on a variety of modalities and demonstrate that an optimized weight prior outperforms former state-of-the-art Bayesian deep image prior approaches. We show that a badly selected prior leads to worse accuracy and calibration and that it is sufficient to optimize the weight prior parameter per task domain.
          BibTeX:
          @inproceedings{Tölle_Laves_Schlaefer_2021,
            author = {Tölle, Malte and Laves, Max-Heinrich and Schlaefer, Alexander},
            title = {A mean-field variational inference approach to deep image prior for inverse problems in medical imaging},
            publisher = {N.N.},
            year = {2021},
            volume = {143},
            url = {https://hdl.handle.net/11420/46793}
          }
          

          2020

          • L. Bargsten, A. Schlaefer (2020), "SpeckleGAN: a generative adversarial network with an adaptive speckle layer to augment limited training data for ultrasound image processing", International Journal of Computer Assisted Radiology and Surgery., Sept, 2020. Vol. 15(9),1427-1436. [Abstract] [BibTeX] [DOI] [URL]
          • Abstract: In the field of medical image analysis, deep learning methods gained huge attention over the last years. This can be explained by their often improved performance compared to classic explicit algorithms. In order to work well, they need large amounts of annotated data for supervised learning, but these are often not available in the case of medical image data. One way to overcome this limitation is to generate synthetic training data, e.g., by performing simulations to artificially augment the dataset. However, simulations require domain knowledge and are limited by the complexity of the underlying physical model. Another method to perform data augmentation is the generation of images by means of neural networks
            BibTeX:
            @article{Bargsten2020,
              author = {L. Bargsten and A. Schlaefer},
              title = {SpeckleGAN: a generative adversarial network with an adaptive speckle layer to augment limited training data for ultrasound image processing},
              booktitle = {International Journal of Computer Assisted Radiology and Surgery},
              journal = {International Journal of Computer Assisted Radiology and Surgery},
              year = {2020},
              volume = {15},
              number = {9},
              pages = {1427-1436},
              url = {https://doi.org/10.1007/s11548-020-02203-1},
              doi = {10.1007/s11548-020-02203-1}
            }
            
          • F. Behrendt, N. Gessert, A. Schlaefer (2020), "Generalization of spatio-temporal deep learning for vision-based force estimation", Current Directions in Biomedical Engineering. Berlin, Boston, May, 2020. Vol. 6(1),20200024. De Gruyter. [Abstract] [BibTeX] [DOI] [URL]
          • Abstract: Robot-assisted minimally-invasive surgery is increasingly used in clinical practice. Force feedback offers potential to develop haptic feedback for surgery systems. Forces can be estimated in a vision-based way by capturing deformation observed in 2D-image sequences with deep learning models. Variations in tissue appearance and mechanical properties likely influence force estimation methods’ generalization. In this work, we study the generalization capabilities of different spatial and spatio-temporal deep learning methods across different tissue samples. We acquire several data-sets using a clinical laparoscope and use both purely spatial and also spatio-temporal deep learning models. The results of this work show that generalization across different tissues is challenging. Nevertheless, we demonstrate that using spatio-temporal data instead of individual frames is valuable for force estimation. In particular, processing spatial and temporal data separately by a combination of a ResNet and GRU architecture shows promising results with a mean absolute error of 15.450 mN compared to 19.744 mN of a purely spatial CNN.
            BibTeX:
            @article{Behrendt2020,
              author = {F. Behrendt and N. Gessert and A. Schlaefer},
              title = {Generalization of spatio-temporal deep learning for vision-based force estimation},
              journal = {Current Directions in Biomedical Engineering},
              publisher = {De Gruyter},
              year = {2020},
              volume = {6},
              number = {1},
              pages = {20200024},
              url = {https://www.degruyter.com/view/journals/cdbme/6/1/article-20200024.xml},
              doi = {10.1515/cdbme-2020-0024}
            }
            
          • M. Bengs, N. Gessert, W. Laffers, D. Eggert, S. Westermann, N. Mueller, A. Gerstner, C. Betz, A. Schlaefer (2020), "Spectral-spatial Recurrent-Convolutional Networks for In-Vivo Hyperspectral Tumor Type Classification", In Medical Image Computing and Computer Assisted Intervention - MICCAI 2020. Cham ,690-699. Springer International Publishing. [Abstract] [BibTeX] [URL]
          • Abstract: Early detection of cancerous tissue is crucial for long-term patient survival. In the head and neck region, a typical diagnostic procedure is an endoscopic intervention where a medical expert manually assesses tissue using RGB camera images. While healthy and tumor regions are generally easier to distinguish, differentiating benign and malignant tumors is very challenging. This requires an invasive biopsy, followed by histological evaluation for diagnosis. Also, during tumor resection, tumor margins need to be verified by histological analysis. To avoid unnecessary tissue resection, a non-invasive, image-based diagnostic tool would be very valuable. Recently, hyperspectral imaging paired with deep learning has been proposed for this task, demonstrating promising results on ex-vivo specimens. In this work, we demonstrate the feasibility of in-vivo tumor type classification using hyperspectral imaging and deep learning. We analyze the value of using multiple hyperspectral bands compared to conventional RGB images and we study several machine learning models' ability to make use of the additional spectral information. Based on our insights, we address spectral and spatial processing using recurrent-convolutional models for effective spectral aggregating and spatial feature learning. Our best model achieves an AUC of 76.3%, significantly outperforming previous conventional and deep learning methods.
            BibTeX:
            @inproceedings{Bengs2020c,
              author = {M. Bengs and N. Gessert and W. Laffers and D. Eggert and S. Westermann and N.A. Mueller and A.O.H. Gerstner and C. Betz and A. Schlaefer},
              title = {Spectral-spatial Recurrent-Convolutional Networks for In-Vivo Hyperspectral Tumor Type Classification},
              booktitle = {Medical Image Computing and Computer Assisted Intervention - MICCAI 2020},
              publisher = {Springer International Publishing},
              year = {2020},
              pages = {690-699},
              url = {http://link-springer-com-443.webvpn.fjmu.edu.cn/chapter/10.1007/978-3-030-59716-0_66}
            }
            
          • M. Bengs, N. Gessert, A. Schlaefer (2020), "4D Spatio-Temporal Deep Learning with 4D fMRI Data for Autism Spectrum Disorder Classification" arXiv. [Abstract] [BibTeX] [DOI] [URL]
          • Abstract: Autism spectrum disorder (ASD) is associated with behavioral and communication problems. Often, functional magnetic resonance imaging (fMRI) is used to detect and characterize brain changes related to the disorder. Recently, machine learning methods have been employed to reveal new patterns by trying to classify ASD from spatio-temporal fMRI images. Typically, these methods have either focused on temporal or spatial information processing. Instead, we propose a 4D spatio-temporal deep learning approach for ASD classification where we jointly learn from spatial and temporal data. We employ 4D convolutional neural networks and convolutional-recurrent models which outperform a previous approach with an F1-score of 0.71 compared to an F1-score of 0.65.
            BibTeX:
            @misc{Bengs2020e,
              author = {Bengs, Marcel and Gessert, Nils and Schlaefer, Alexander},
              title = {4D Spatio-Temporal Deep Learning with 4D fMRI Data for Autism Spectrum Disorder Classification},
              publisher = {arXiv},
              year = {2020},
              url = {https://arxiv.org/abs/2004.10165},
              doi = {10.48550/ARXIV.2004.10165}
            }
            
          • M. Bengs, N. Gessert, A. Schlaefer (2020), "A Deep Learning Approach for Motion Forecasting Using 4D OCT Data", In International Conference on Medical Imaging with Deep Learning. [Abstract] [BibTeX] [URL]
          • Abstract: Forecasting motion of a specific target object is a common problem for surgical interventions, e.g. for localization of a target region, guidance for surgical interventions, or motion compensation. Optical coherence tomography (OCT) is an imaging modality with a high spatial and temporal resolution. Recently, deep learning methods have shown promising performance for OCT-based motion estimation based on two volumetric images. We extend this approach and investigate whether using a time series of volumes enables motion forecasting. We propose 4D spatio-temporal deep learning for end-to-end motion forecasting and estimation using a stream of OCT volumes. We design and evaluate five different 3D and 4D deep learning methods using a tissue data set. Our best performing 4D method achieves motion forecasting with an overall average correlation coefficient of 97.41%, while also improving motion estimation performance by a factor of 2.5 compared to a previous 3D approach.
            BibTeX:
            @inproceedings{Bengs2020a,
              author = {M. Bengs and N. Gessert and A. Schlaefer},
              title = {A Deep Learning Approach for Motion Forecasting Using 4D OCT Data},
              booktitle = {International Conference on Medical Imaging with Deep Learning},
              year = {2020},
              url = {https://arxiv.org/abs/2004.10121}
            }
            
          • M. Bengs, N. Gessert, M. Schlüter, A. Schlaefer (2020), "Spatio-Temporal Deep Learning Methods for Motion Estimation Using 4D OCT Image Data", International Journal of Computer Assisted Radiology and Surgery., Jun, 2020. Vol. 15(6),943-952. [Abstract] [BibTeX] [DOI] [URL]
          • Abstract: Purpose. Localizing structures and estimating the motion of a specific target region are common problems for navigation during surgical interventions. Optical coherence tomography (OCT) is an imaging modality with a high spatial and temporal resolution that has been used for intraoperative imaging and also for motion estimation, for example, in the context of ophthalmic surgery or cochleostomy. Recently, motion estimation between a template and a moving OCT image has been studied with deep learning methods to overcome the shortcomings of conventional, feature-based methods. Methods. We investigate whether using a temporal stream of OCT image volumes can improve deep learning-based motion estimation performance. For this purpose, we design and evaluate several 3D and 4D deep learning methods and we propose a new deep learning approach. Also, we propose a temporal regularization strategy at the model output. Results. Using a tissue dataset without additional markers, our deep learning methods using 4D data outperform previous approaches. The best performing 4D architecture achieves an average correlation coefficient (aCC) of 98.58% compared to 85.0% of a previous 3D deep learning method. Also, our temporal regularization strategy at the output further improves 4D model performance to an aCC of 99.06%. In particular, our 4D method works well for larger motion and is robust towards image rotations and motion distortions. Conclusions. We propose 4D spatio-temporal deep learning for OCT-based motion estimation. On a tissue dataset, we find that using 4D information for the model input improves performance while maintaining reasonable inference times. Our regularization strategy demonstrates that additional temporal information is also beneficial at the model output.
            BibTeX:
            @article{Bengs2020b,
              author = {M. Bengs and N. Gessert and M. Schlüter and A. Schlaefer},
              title = {Spatio-Temporal Deep Learning Methods for Motion Estimation Using 4D OCT Image Data},
              booktitle = {International Journal of Computer Assisted Radiology and Surgery},
              journal = {International Journal of Computer Assisted Radiology and Surgery},
              year = {2020},
              volume = {15},
              number = {6},
              pages = {943-952},
              url = {https://arxiv.org/abs/2004.10114},
              doi = {10.1007/s11548-020-02178-z}
            }
            
          • M. Bengs, N. Gessert, A. Schlaefer (2020), "4D spatio-temporal convolutional networks for object position estimation in OCT volumes", Current Directions in Biomedical Engineering. Vol. 6(1),20200001. [Abstract] [BibTeX] [DOI] [URL]
          • Abstract: Tracking and localizing objects is a central problem in computer-assisted surgery. Optical coherence tomography (OCT) can be employed as an optical tracking system, due to its high spatial and temporal resolution. Recently, 3D convolutional neural networks (CNNs) have shown promising performance for pose estimation of a marker object using single volumetric OCT images. While this approach relied on spatial information only, OCT allows for a temporal stream of OCT image volumes capturing the motion of an object at high volume rates. In this work, we systematically extend 3D CNNs to 4D spatio-temporal CNNs to evaluate the impact of additional temporal information for marker object tracking. Across various architectures, our results demonstrate that using a stream of OCT volumes and employing 4D spatio-temporal convolutions leads to a 30% lower mean absolute error compared to single volume processing with 3D CNNs.
            BibTeX:
            @article{Bengs2020d,
              author = {M. Bengs and N. Gessert and A. Schlaefer},
              title = {4D spatio-temporal convolutional networks for object position estimation in OCT volumes},
              journal = {Current Directions in Biomedical Engineering},
              year = {2020},
              volume = {6},
              number = {1},
              pages = {20200001},
              url = {http://hdl.handle.net/11420/7722},
              doi = {10.15480/882.3036}
            }
            
          • M. Bengs, S. Westermann, N. Gessert, D. Eggert, A. O. H. Gerstner, N. A. Mueller, C. Betz, W. Laffers, A. Schlaefer (2020), "Spatio-spectral deep learning methods for in-vivo hyperspectral laryngeal cancer detection", SPIE Medical Imaging 2020: Computer-Aided Diagnosis. ,in print. [BibTeX]
          • BibTeX:
            @conference{Bengs2020,
              author = {M. Bengs and S. Westermann and N. Gessert and D. Eggert and A. O. H. Gerstner and N. A. Mueller and C. Betz and W. Laffers and A. Schlaefer},
              title = {Spatio-spectral deep learning methods for in-vivo hyperspectral laryngeal cancer detection},
              booktitle = {SPIE Medical Imaging 2020: Computer-Aided Diagnosis},
              journal = {SPIE Medical Imaging 2020: Computer-Aided Diagnosis},
              year = {2020},
              pages = {in print}
            }
            
          • M. Bengs, S. Westermann, N. Gessert, D. Eggert, A. O. H. Gerstner, N. A. Mueller, C. Betz, W. Laffers, A. Schlaefer (2020), "Spatio-spectral deep learning methods for in-vivo hyperspectral laryngeal cancer detection", In Medical Imaging 2020: Computer-Aided Diagnosis. Vol. 11314,113141L. SPIE. [Abstract] [BibTeX] [DOI] [URL]
          • Abstract: Early detection of head and neck tumors is crucial for patient survival. Often, diagnoses are made based on endoscopic examination of the larynx followed by biopsy and histological analysis, leading to a high interobserver variability due to subjective assessment. In this regard, early non-invasive diagnostics independent of the clinician would be a valuable tool. A recent study has shown that hyperspectral imaging (HSI) can be used for non-invasive detection of head and neck tumors, as precancerous or cancerous lesions show specific spectral signatures that distinguish them from healthy tissue. However, HSI data processing is challenging due to high spectral variations, various image interferences, and the high dimensionality of the data. Therefore, performance of automatic HSI analysis has been limited and so far, mostly ex-vivo studies have been presented with deep learning. In this work, we analyze deep learning techniques for in-vivo hyperspectral laryngeal cancer detection. For this purpose we design and evaluate convolutional neural networks (CNNs) with 2D spatial or 3D spatio-spectral convolutions combined with a state-of-the-art Densenet architecture. For evaluation, we use an in-vivo data set with HSI of the oral cavity or oropharynx. Overall, we present multiple deep learning techniques for in-vivo laryngeal cancer detection based on HSI and we show that jointly learning from the spatial and spectral domain improves classification accuracy notably. Our 3D spatio-spectral Densenet achieves an average accuracy of 81%.
            BibTeX:
            @inproceedings{Bengs2020f,
              author = {Marcel Bengs and Stephan Westermann and Nils Gessert and Dennis Eggert and Andreas O. H. Gerstner and Nina A. Mueller and Christian Betz and Wiebke Laffers and Alexander Schlaefer},
              editor = {Horst K. Hahn and Maciej A. Mazurowski},
              title = {Spatio-spectral deep learning methods for in-vivo hyperspectral laryngeal cancer detection},
              booktitle = {Medical Imaging 2020: Computer-Aided Diagnosis},
              publisher = {SPIE},
              year = {2020},
              volume = {11314},
              pages = {113141L},
              url = {https://doi.org/10.1117/12.2549251},
              doi = {10.1117/12.2549251}
            }
            
          • D. Ellebrecht, S. Latus, A. Schlaefer, T. Keck, N. Gessert (2020), "Towards an Optical Biopsy during Visceral Surgical Interventions", Visceral Medicine. [Abstract] [BibTeX] [DOI]
          • Abstract: Cancer will replace cardiovascular diseases as the most frequent cause of death. Therefore, the goals of cancer treatment are prevention strategies and early detection by cancer screening and ideal stage therapy. From an oncological point of view, complete tumor resection is a significant prognostic factor. Optical coherence tomography (OCT) and confocal laser microscopy (CLM) are two techniques that have the potential to complement intraoperative frozen section analysis as in vivo and real-time optical biopsies. Summary: In this review we present both procedures and review the progress of evaluation for intraoperative application in visceral surgery. For visceral surgery, there are promising studies evaluating OCT and CLM; however, application during routine visceral surgical interventions is still lacking. Key Message: OCT and CLM are not competing but complementary approaches of tissue analysis to intraoperative frozen section analysis. Although intraoperative application of OCT and CLM is at an early stage, they are two promising techniques of intraoperative in vivo and real-time tissue examination. Additionally, deep learning strategies provide a significant supplement for automated tissue detection
            BibTeX:
            @article{Ellebrecht2020,
              author = {D.B. Ellebrecht and S. Latus and A. Schlaefer and T. Keck and N. Gessert},
              title = {Towards an Optical Biopsy during Visceral Surgical Interventions},
              booktitle = {Visceral Medicine},
              journal = {Visceral Medicine},
              year = {2020},
              doi = {10.1159/000505938}
            }
            
          • S. Gerlach, C. Fürweger, T. Hofmann, A. Schlaefer (2020), "Feasibility and analysis of CNN-based candidate beam generation for robotic radiosurgery", Medical Physics. Vol. 47(9),3806-3815. [Abstract] [BibTeX] [DOI] [URL]
          • Abstract: Purpose Robotic radiosurgery offers the flexibility of a robotic arm to enable high conformity to the target and a steep dose gradient. However, treatment planning becomes a computationally challenging task as the search space for potential beam directions for dose delivery is arbitrarily large. We propose an approach based on deep learning to improve the search for treatment beams. Methods In clinical practice, a set of candidate beams generated by a randomized heuristic forms the basis for treatment planning. We use a convolutional neural network to identify promising candidate beams. Using radiological features of the patient, we predict the influence of a candidate beam on the delivered dose individually and let this prediction guide the selection of candidate beams. Features are represented as projections of the organ structures which are relevant during planning. Solutions to the inverse planning problem are generated for random and CNN-predicted candidate beams. Results The coverage increases from 95.35% to 97.67% for 6000 heuristically and CNN-generated candidate beams, respectively. Conversely, a similar coverage can be achieved for treatment plans with half the number of candidate beams. This results in a patient-dependent reduced averaged computation time of 20.28%–45.69%. The number of active treatment beams can be reduced by 11.35% on average, which reduces treatment time. Constraining the maximum number of candidate beams per beam node can further improve the average coverage by 0.75 percentage points for 6000 candidate beams. Conclusions We show that deep learning based on radiological features can substantially improve treatment plan quality, reduce computation runtime, and treatment time compared to the heuristic approach used in clinics
            BibTeX:
            @article{Gerlach2020a,
              author = {S. Gerlach and C. Fürweger and T. Hofmann and A. Schlaefer},
              title = {Feasibility and analysis of CNN-based candidate beam generation for robotic radiosurgery},
              journal = {Medical Physics},
              year = {2020},
              volume = {47},
              number = {9},
              pages = {3806-3815},
              url = {https://aapm.onlinelibrary.wiley.com/doi/abs/10.1002/mp.14331},
              doi = {10.1002/mp.14331}
            }
            
          • S. Gerlach, C. Fürweger, T. Hofmann, A. Schlaefer (2020), "Multicriterial CNN based beam generation for robotic radiosurgery of the prostate", Current Directions in Biomedical Engineering. Berlin, Boston, May, 2020. Vol. 6(1),20200030. De Gruyter. [Abstract] [BibTeX] [DOI] [URL]
          • Abstract: Although robotic radiosurgery offers a flexible arrangement of treatment beams, generating treatment plans is computationally challenging and a time consuming process for the planner. Furthermore, different clinical goals have to be considered during planning and generally different sets of beams correspond to different clinical goals. Typically, candidate beams sampled from a randomized heuristic form the basis for treatment planning. We propose a new approach to generate candidate beams based on deep learning using radiological features as well as the desired constraints. We demonstrate that candidate beams generated for specific clinical goals can improve treatment plan quality. Furthermore, we compare two approaches to include information about constraints in the prediction. Our results show that CNN generated beams can improve treatment plan quality for different clinical goals, increasing coverage from 91.2 to 96.8% for 3,000 candidate beams on average. When including the clinical goal in the training, coverage is improved by 1.1% points
            BibTeX:
            @article{Gerlach2020,
              author = {S. Gerlach and C. Fürweger and T. Hofmann and A. Schlaefer},
              title = {Multicriterial CNN based beam generation for robotic radiosurgery of the prostate},
              journal = {Current Directions in Biomedical Engineering},
              publisher = {De Gruyter},
              year = {2020},
              volume = {6},
              number = {1},
              pages = {20200030},
              url = {https://www.degruyter.com/view/journals/cdbme/6/1/article-20200030.xml},
              doi = {10.1515/cdbme-2020-0030}
            }
            
          • S. Gerlach, F. Siebert, A. Schlaefer (2020), "BReP-SNAP-T-54: Efficient Stochastic Optimization Accounting for Uncertainty in HDR Prostate Brachytherapy Needle Placement", Medical Physics. Vol. 47(6),e458. [Abstract] [BibTeX] [DOI] [URL]
          • Abstract: Purpose: Uncertainty due to tissue deformation affects treatment planning for HDR prostate brachytherapy. Hence, position and orientation of the needles are typically not optimized in inverse planning. Stochastic linear programming (SLP) has been proposed to consider uncertainty during optimization. Conventionally, it draws samples from a probability distribution but increases the problem size substantially. We propose an efficient scheme allowing for fast identification of robust needle configurations. Methods: We account for uncertainty along the needle axis by deforming the target using B-Spline interpolation and a random displacement of the voxel at the needle tip. Conventional SLP adds constraints for each sample. The new weighted SLP (WSLP) scheme first creates all spatial distributions and then establishes one discretized optimization problem where weights in the objective function represent the likelihood of voxels falling into grid elements. Both approaches and the original deterministic problem are compared on a set of 5 patient cases. Moreover, we use WSLP on a large set of randomly generated needles to select a robust subset of needles. Evaluations are done on 100 independently sampled deformations. Results: Depending on the deformation and needle count, SLP and WSLP improve the target coverage by 1.5 to 10.9 percentage points compared to deterministic optimization. There is no significant difference in target coverage between plans for SLP and WSLP (p = 0.98) but WSLP is substantially more efficient, taking below ten seconds instead of more than four hours when considering 100 sampled deformations. Using WSLP to identify robust needle configurations, coverage can be improved 0.7 to 3.3 percentage points over the most promising needle configurations identified by deterministic optimization. Conclusion: WSLP allows for fast optimization considering a dense sample of possible deformations. Using WSLP, it is feasible to realize inverse planning incorporating uncertainty in needle placement and to identify robust needle sets.
            BibTeX:
            @article{Gerlach2020b,
              author = {S. Gerlach and F. Siebert and A. Schlaefer},
              title = {BReP-SNAP-T-54: Efficient Stochastic Optimization Accounting for Uncertainty in HDR Prostate Brachytherapy Needle Placement},
              journal = {Medical Physics},
              year = {2020},
              volume = {47},
              number = {6},
              pages = {e458},
              url = {https://aapm.onlinelibrary.wiley.com/doi/abs/10.1002/mp.14316},
              doi = {10.1002/mp.14316}
            }
            
          • N. Gessert, M. Bengs, J. Krüger, R. Opfer, A.-C. Ostwaldt, P. Manogaran, S. Schippling, A. Schlaefer (2020), "4D Deep Learning for Multiple-Sclerosis Lesion Activity Segmentation", In Medical Imaging with Deep Learning. [Abstract] [BibTeX] [URL]
          • Abstract: Multiple sclerosis lesion activity segmentation is the task of detecting new and enlarging lesions that appeared between a baseline and a follow-up brain MRI scan. While deep learning methods for single-scan lesion segmentation are common, deep learning approaches for lesion activity have only been proposed recently. Here, a two-path architecture processes two 3D MRI volumes from two time points. In this work, we investigate whether extending this problem to full 4D deep learning using a history of MRI volumes and thus an extended baseline can improve performance. For this purpose, we design a recurrent multi-encoder-decoder architecture for processing 4D data. We find that adding more temporal information is beneficial and our proposed architecture outperforms previous approaches with a lesion-wise true positive rate of 0.84 at a lesion-wise false positive rate of 0.19.
            BibTeX:
            @inproceedings{Gessert2020c,
              author = {N. Gessert and M. Bengs and J. Krüger and R. Opfer and A.-C. Ostwaldt and P. Manogaran and S. Schippling and A. Schlaefer},
              title = {4D Deep Learning for Multiple-Sclerosis Lesion Activity Segmentation},
              booktitle = {Medical Imaging with Deep Learning},
              year = {2020},
              url = {https://openreview.net/forum?id=sMsAIWBSvg}
            }
            
          • N. Gessert, M. Bengs, A. Schlaefer (2020), "Melanoma detection with electrical impedance spectroscopy and dermoscopy using joint deep learning models", In SPIE Medical Imaging 2020. ,in print. [Abstract] [BibTeX] [URL]
          • Abstract: The initial assessment of skin lesions is typically based on dermoscopic images. As this is a difficult and time-consuming task, machine learning methods using dermoscopic images have been proposed to assist human experts. Other approaches have studied electrical impedance spectroscopy (EIS) as a basis for clinical decision support systems. Both methods represent different ways of measuring skin lesion properties as dermoscopy relies on visible light and EIS uses electric currents. Thus, the two methods might carry complementary features for lesion classification. Therefore, we propose joint deep learning models considering both EIS and dermoscopy for melanoma detection. For this purpose, we first study machine learning methods for EIS that incorporate domain knowledge and previously used heuristics into the design process. As a result, we propose a recurrent model with state-max-pooling which automatically learns the relevance of different EIS measurements. Second, we combine this new model with different convolutional neural networks that process dermoscopic images. We study ensembling approaches and also propose a cross-attention module guiding information exchange between the EIS and dermoscopy model. In general, combinations of EIS and dermoscopy clearly outperform models that only use either EIS or dermoscopy. We show that our attention-based, combined model outperforms other models with specificities of 34.4 % (CI 31.3-38.4), 34.7 % (CI 31.0-38.8) and 53.7 % (CI 50.1-57.6) for dermoscopy, EIS and the combined model, respectively, at a clinically relevant sensitivity of 98 %.
            BibTeX:
            @inproceedings{Gessert2020b,
              author = {N. Gessert and M. Bengs and A. Schlaefer},
              title = {Melanoma detection with electrical impedance spectroscopy and dermoscopy using joint deep learning models},
              booktitle = {SPIE Medical Imaging 2020},
              year = {2020},
              pages = {in print},
              url = {https://arxiv.org/abs/1911.02322v2}
            }
            
          • N. Gessert, M. Bengs, M. Schlüter, A. Schlaefer (2020), "Deep learning with 4D spatio-temporal data representations for OCT-based force estimation", Medical Image Analysis. Vol. 64(101730) [Abstract] [BibTeX] [DOI] [URL]
          • Abstract: Estimating the forces acting between instruments and tissue is a challenging problem for robot-assisted minimally-invasive surgery. Recently, numerous vision-based methods have been proposed to replace electro-mechanical approaches. Moreover, optical coherence tomography (OCT) and deep learning have been used for estimating forces based on deformation observed in volumetric image data. The method demonstrated the advantage of deep learning with 3D volumetric data over 2D depth images for force estimation. In this work, we extend the problem of deep learning-based force estimation to 4D spatio-temporal data with streams of 3D OCT volumes. For this purpose, we design and evaluate several methods extending spatio-temporal deep learning to 4D which is largely unexplored so far. Furthermore, we provide an in-depth analysis of multi-dimensional image data representations for force estimation, comparing our 4D approach to previous, lower-dimensional methods. Also, we analyze the effect of temporal information and we study the prediction of short-term future force values, which could facilitate safety features. For our 4D force estimation architectures, we find that efficient decoupling of spatial and temporal processing is advantageous. We show that using 4D spatio-temporal data outperforms all previously used data representations with a mean absolute error of 10.7 mN. We find that temporal information is valuable for force estimation and we demonstrate the feasibility of force prediction
            BibTeX:
            @article{Gessert2020e,
              author = {N. Gessert and M. Bengs and M. Schlüter and A. Schlaefer},
              title = {Deep learning with 4D spatio-temporal data representations for OCT-based force estimation},
              booktitle = {Medical Image Analysis},
              journal = {Medical Image Analysis},
              year = {2020},
              volume = {64},
              number = {101730},
              url = {https://doi.org/10.1016/j.media.2020.101730},
              doi = {10.1016/j.media.2020.101730}
            }
            
          • N. Gessert, J. Krüger, R. Opfer, A.-C. Ostwaldt, P. Manogaran, H. H. Kitzler, S. Schippling, A. Schlaefer (2020), "Multiple Sclerosis Lesion Activity Segmentation with Attention-Guided Two-Path CNNs", Computerized Medical Imaging and Graphics., Sept, 2020. Vol. 84(101772) [Abstract] [BibTeX] [DOI] [URL]
          • Abstract: Multiple sclerosis is an inflammatory autoimmune demyelinating disease that is characterized by lesions in the central nervous system. Typically, magnetic resonance imaging (MRI) is used for tracking disease progression. Automatic image processing methods can be used to segment lesions and derive quantitative lesion parameters. So far, methods have focused on lesion segmentation for individual MRI scans. However, for monitoring disease progression, lesion activity in terms of new and enlarging lesions between two time points is a crucial biomarker. For this problem, several classic methods have been proposed, e.g., using difference volumes. Despite their success for single-volume lesion segmentation, deep learning approaches are still rare for lesion activity segmentation. In this work, convolutional neural networks (CNNs) are studied for lesion activity segmentation from two time points. For this task, CNNs are designed and evaluated that combine the information from two points in different ways. In particular, two-path architectures with attention-guided interactions are proposed that enable effective information exchange between the two time points' processing paths. It is demonstrated that deep learning-based methods outperform classic approaches and it is shown that attention-guided interactions significantly improve performance. Furthermore, the attention modules produce plausible attention maps that have a masking effect that suppresses old, irrelevant lesions. A lesion-wise false positive rate of 26.4% is achieved at a true positive rate of 74.2%, which is not significantly different from the interrater performance.
            BibTeX:
            @article{Gessert2020f,
              author = {N. Gessert and J. Krüger and R. Opfer and A.-C. Ostwaldt and P. Manogaran and H. H. Kitzler and S. Schippling and A. Schlaefer},
              title = {Multiple Sclerosis Lesion Activity Segmentation with Attention-Guided Two-Path CNNs},
              booktitle = {Computerized Medical Imaging and Graphics},
              journal = {Computerized Medical Imaging and Graphics},
              year = {2020},
              volume = {84},
              number = {101772},
              url = {https://arxiv.org/abs/2008.02001},
              doi = {10.1016/j.compmedimag.2020.101772}
            }
            
          • N. Gessert, M. Nielsen, M. Shaikh, R. Werner, A. Schlaefer (2020), "Skin lesion classification using ensembles of multi-resolution EfficientNets with meta data", MethodsX. Vol. 7,100864. [Abstract] [BibTeX] [DOI] [URL]
          • Abstract: In this paper, we describe our method for the ISIC 2019 Skin Lesion Classification Challenge. The challenge comes with two tasks. For task 1, skin lesions have to be classified based on dermoscopic images. For task 2, dermoscopic images and additional patient meta data are used. Our deep learning-based method achieved first place for both tasks. There are several problems we address with our method. First, there is an unknown class in the test set which we cover with a data-driven approach. Second, there is a severe class imbalance that we address with loss balancing. Third, there are images with different resolutions which motivates two different cropping strategies and multi-crop evaluation. Last, there is patient meta data available which we incorporate with a dense neural network branch. • We address skin lesion classification with an ensemble of deep learning models including EfficientNets, SENet, and ResNeXt WSL, selected by a search strategy. • We rely on multiple model input resolutions and employ two cropping strategies for training. We counter severe class imbalance with a loss balancing approach. • We predict an additional, unknown class with a data-driven approach and we make use of patient meta data with an additional input branch.
            BibTeX:
            @article{Gessert2020d,
              author = {N. Gessert and M. Nielsen and M. Shaikh and R. Werner and A. Schlaefer},
              title = {Skin lesion classification using ensembles of multi-resolution EfficientNets with meta data},
              journal = {MethodsX},
              year = {2020},
              volume = {7},
              pages = {100864},
              url = {http://www.sciencedirect.com/science/article/pii/S2215016120300832},
              doi = {10.1016/j.mex.2020.100864}
            }
            
          • N. Gessert, A. Schlaefer (2020), "Left Ventricle Quantification Using Direct Regression with Segmentation Regularization and Ensembles of Pretrained 2D and 3D CNNs", arXiv e-prints., Aug, 2020. ,Statistical Atlases and Computational Models of the Heart. Multi-Sequence CMR Segmentation, CRT-EPiggy and LV Full Quantification Challenges. STACOM@MICCAI 2019. Lecture Notes in Computer Science. 375-383. [Abstract] [BibTeX] [DOI] [URL]
          • Abstract: Cardiac left ventricle (LV) quantification provides a tool for diagnosing cardiac diseases. Automatic calculation of all relevant LV indices from cardiac MR images is an intricate task due to large variations among patients and deformation during the cardiac cycle. Typical methods are based on segmentation of the myocardium or direct regression from MR images. To consider cardiac motion and deformation, recurrent neural networks and spatio-temporal convolutional neural networks (CNNs) have been proposed. We study an approach combining state-of-the-art models and emphasizing transfer learning to account for the small dataset provided for the LVQuan19 challenge. We compare 2D spatial and 3D spatio-temporal CNNs for LV indices regression and cardiac phase classification. To incorporate segmentation information, we propose an architecture-independent segmentation-based regularization. To improve the robustness further, we employ a search scheme that identifies the optimal ensemble from a set of architecture variants. Evaluating on the LVQuan19 Challenge training dataset with 5-fold cross-validation, we achieve mean absolute errors of 111 ± 76 mm2, 1.84 ± 0.90 mm and 1.22 ± 0.60 mm for area, dimension and regional wall thickness regression, respectively. The error rate for cardiac phase classification is 6.7 %.
            BibTeX:
            @article{Gessert2020,
              author = {N. Gessert and A. Schlaefer},
              title = {Left Ventricle Quantification Using Direct Regression with Segmentation Regularization and Ensembles of Pretrained 2D and 3D CNNs},
              journal = {arXiv e-prints},
              year = {2020},
              pages = {Statistical Atlases and Computational Models of the Heart. Multi-Sequence CMR Segmentation, CRT-EPiggy and LV Full Quantification Challenges. STACOM@MICCAI 2019. Lecture Notes in Computer Science. 375-383},
              url = {https://arxiv.org/abs/1908.04181},
              doi = {10.1007/978-3-030-39074-7_39}
            }
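
            Illustrative sketch (Python): a minimal multi-task objective in the spirit of direct regression with segmentation-based regularization, i.e. a shared backbone with a regression head for the LV indices and an auxiliary segmentation head whose loss acts as a regularizer. The toy network, the number of indices, and the loss weight are assumptions, not the models from the paper.
              import torch
              import torch.nn as nn

              class ToyLVNet(nn.Module):
                  def __init__(self, n_indices=11):
                      super().__init__()
                      self.backbone = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU())
                      self.regressor = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                                     nn.Linear(8, n_indices))
                      self.seg_head = nn.Conv2d(8, 1, 1)  # auxiliary myocardium mask logits

                  def forward(self, x):
                      feat = self.backbone(x)
                      return self.regressor(feat), self.seg_head(feat)

              model = ToyLVNet()
              x = torch.randn(2, 1, 80, 80)                         # dummy cardiac MR slices
              indices_gt = torch.randn(2, 11)                       # dummy LV indices
              mask_gt = torch.randint(0, 2, (2, 1, 80, 80)).float()

              pred_indices, pred_mask = model(x)
              loss = nn.functional.mse_loss(pred_indices, indices_gt) \
                     + 0.1 * nn.functional.binary_cross_entropy_with_logits(pred_mask, mask_gt)
              loss.backward()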
            
          • N. Gessert, T. Sentker, F. Madesta, R. Schmitz, H. Kniep, I. Baltruschat, R. Werner, A. Schlaefer (2020), "Skin Lesion Classification Using CNNs With Patch-Based Attention and Diagnosis-Guided Loss Weighting", IEEE Transactions on Biomedical Engineering., Feb, 2020. Vol. 67(2),495-503. [Abstract] [BibTeX] [DOI] [URL]
          • Abstract: Objective: This paper addresses two key problems of skin lesion classification. The first problem is the effective use of high-resolution images with pretrained standard architectures for image classification. The second problem is the high class imbalance encountered in real-world multi-class datasets. Methods: To use high-resolution images, we propose a novel patch-based attention architecture that provides global context between small, high-resolution patches. We modify three pretrained architectures and study the performance of patch-based attention. To counter class imbalance problems, we compare oversampling, balanced batch sampling, and class-specific loss weighting. Additionally, we propose a novel diagnosis-guided loss weighting method that takes the method used for ground-truth annotation into account. Results: Our patch-based attention mechanism outperforms previous methods and improves the mean sensitivity by 7%. Class balancing significantly improves the mean sensitivity and we show that our diagnosis-guided loss weighting method improves the mean sensitivity by 3% over normal loss balancing. Conclusion: The novel patch-based attention mechanism can be integrated into pretrained architectures and provides global context between local patches while outperforming other patch-based methods. Hence, pretrained architectures can be readily used with high-resolution images without downsampling. The new diagnosis-guided loss weighting method outperforms other methods and allows for effective training when facing class imbalance. Significance: The proposed methods improve automatic skin lesion classification. They can be extended to other clinical applications where high-resolution image data and class imbalance are relevant.
            BibTeX:
            @article{Gessert2020a,
              author = {N. Gessert and T. Sentker and F. Madesta and R. Schmitz and H. Kniep and I. Baltruschat and R. Werner and A. Schlaefer},
              title = {Skin Lesion Classification Using CNNs With Patch-Based Attention and Diagnosis-Guided Loss Weighting},
              journal = {IEEE Transactions on Biomedical Engineering},
              year = {2020},
              volume = {67},
              number = {2},
              pages = {495-503},
              url = {https://arxiv.org/abs/1905.02793},
              doi = {10.1109/TBME.2019.2915839}
            }
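
            Illustrative sketch (Python): not the architecture from the paper, only a generic example of the patch-based idea, i.e. a high-resolution image is split into patches, each patch is embedded by a small CNN, and attention weights pool the patch features into a single global prediction.
              import torch
              import torch.nn as nn

              class PatchAttentionClassifier(nn.Module):
                  def __init__(self, patch=224, dim=64, n_classes=8):
                      super().__init__()
                      self.patch = patch
                      self.embed = nn.Sequential(nn.Conv2d(3, dim, 7, stride=4), nn.ReLU(),
                                                 nn.AdaptiveAvgPool2d(1), nn.Flatten())
                      self.attn = nn.Linear(dim, 1)   # scalar attention score per patch
                      self.head = nn.Linear(dim, n_classes)

                  def forward(self, x):
                      p = self.patch
                      # Split the image into non-overlapping patches of size p x p.
                      patches = x.unfold(2, p, p).unfold(3, p, p)
                      b, c, nh, nw, _, _ = patches.shape
                      patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(b * nh * nw, c, p, p)
                      feats = self.embed(patches).view(b, nh * nw, -1)
                      weights = torch.softmax(self.attn(feats), dim=1)  # attention over patches
                      pooled = (weights * feats).sum(dim=1)             # global context vector
                      return self.head(pooled)

              model = PatchAttentionClassifier()
              logits = model(torch.randn(1, 3, 896, 896))   # dummy high-resolution image
              print(logits.shape)                           # torch.Size([1, 8])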
            
          • F. Griese, S. Latus, M. Schlüter, M. Graeser, M. Lutz, A. Schlaefer, T. Knopp (2020), "In-Vitro MPI-guided IVOCT catheter tracking in real time for motion artifact compensation", PLOS ONE., 03, 2020. Vol. 15(3),e0230821. Public Library of Science. [Abstract] [BibTeX] [DOI] [URL]
          • Abstract: Purpose: Using 4D magnetic particle imaging (MPI), intravascular optical coherence tomography (IVOCT) catheters are tracked in real time in order to compensate for image artifacts related to relative motion. Our approach demonstrates the feasibility for bimodal IVOCT and MPI in-vitro experiments. Material and methods: During IVOCT imaging of a stenosis phantom the catheter is tracked using MPI. A 4D trajectory of the catheter tip is determined from the MPI data using center of mass sub-voxel strategies. A custom-built IVOCT imaging adapter is used to perform different catheter motion profiles: no motion artifacts, motion artifacts due to catheter bending, and heart beat motion artifacts. Two IVOCT volume reconstruction methods are compared qualitatively and quantitatively using the DICE metric and the known stenosis length. Results: The MPI-tracked trajectory of the IVOCT catheter is validated in multiple repeated measurements calculating the absolute mean error and standard deviation. Both volume reconstruction methods are compared and analyzed with respect to whether they are capable of compensating the motion artifacts. The novel approach of MPI-guided catheter tracking corrects motion artifacts, leading to a DICE coefficient with a minimum of 86% in comparison to 58% for a standard reconstruction approach. Conclusions: IVOCT catheter tracking with MPI in real time is an auspicious method for radiation-free MPI-guided IVOCT interventions. The combination of MPI and IVOCT can help to reduce motion artifacts due to catheter bending and heart beat for optimized IVOCT volume reconstructions.
            BibTeX:
            @article{Griese2020,
              author = {F. Griese and S. Latus and M. Schlüter and M. Graeser and M. Lutz and A. Schlaefer and T. Knopp},
              title = {In-Vitro MPI-guided IVOCT catheter tracking in real time for motion artifact compensation},
              journal = {PLOS ONE},
              publisher = {Public Library of Science},
              year = {2020},
              volume = {15},
              number = {3},
              pages = {e0230821},
              url = {https://arxiv.org/abs/1911.12226},
              doi = {10.1371/journal.pone.0230821}
            }
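
            Illustrative sketch (Python): a minimal stand-in for center-of-mass sub-voxel localization on a 3D volume, using a synthetic hotspot and an assumed voxel spacing; this is not the published MPI tracking pipeline.
              import numpy as np
              from scipy import ndimage

              volume = np.zeros((19, 19, 19))
              volume[9, 10, 8] = 1.0
              volume = ndimage.gaussian_filter(volume, sigma=1.5)   # synthetic blurred hotspot

              mask = volume > 0.5 * volume.max()                    # keep the dominant signal
              com_voxels = ndimage.center_of_mass(volume * mask)    # sub-voxel position (z, y, x)
              spacing_mm = np.array([2.0, 2.0, 1.0])                # assumed voxel size in mm
              com_mm = np.array(com_voxels) * spacing_mm
              print(com_voxels, com_mm)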
            
          • M. Gromniak, N. Gessert, T. Saathoff, A. Schlaefer (2020), "Needle tip force estimation by deep learning from raw spectral OCT data", International Journal of Computer Assisted Radiology and Surgery. Vol. 15,1699-1702. [Abstract] [BibTeX] [DOI] [URL]
          • Abstract: Purpose Needle placement is a challenging problem for applications such as biopsy or brachytherapy. Tip force sensing can provide valuable feedback for needle navigation inside the tissue. For this purpose, fiber-optical sensors can be directly integrated into the needle tip. Optical coherence tomography (OCT) can be used to image tissue. Here, we study how to calibrate OCT to sense forces, e.g., during robotic needle placement. Methods We investigate whether using raw spectral OCT data without a typical image reconstruction can improve a deep learning-based calibration between optical signal and forces. For this purpose, we consider three different needles with a new, more robust design which are calibrated using convolutional neural networks (CNNs). We compare training the CNNs with the raw OCT signal and the reconstructed depth profiles. Results We find that using raw data as an input for the largest CNN model outperforms the use of reconstructed data with a mean absolute error of 5.81 mN compared to 8.04 mN. Conclusions We find that deep learning with raw spectral OCT data can improve learning for the task of force estimation. Our needle design and calibration approach constitute a very accurate fiber-optical sensor for measuring forces at the needle tip
            BibTeX:
            @article{Gromniak2020,
              author = {M. Gromniak and N. Gessert and T. Saathoff and A. Schlaefer},
              title = {Needle tip force estimation by deep learning from raw spectral OCT data},
              booktitle = {International Journal of Computer Assisted Radiology and Surgery},
              journal = {International Journal of Computer Assisted Radiology and Surgery},
              year = {2020},
              volume = {15},
              pages = {1699-1702},
              url = {https://link.springer.com/article/10.1007%2Fs11548-020-02224-w},
              doi = {10.1007/s11548-020-02224-w}
            }
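
            Illustrative sketch (Python): a small 1D CNN that maps a raw OCT spectrum to a scalar force, trained with an L1 objective so the optimized quantity matches the reported mean absolute error; input size, architecture, and data are assumptions, not the calibration models from the paper.
              import torch
              import torch.nn as nn

              class SpectrumForceNet(nn.Module):
                  def __init__(self):
                      super().__init__()
                      self.features = nn.Sequential(
                          nn.Conv1d(1, 16, 9, stride=4), nn.ReLU(),
                          nn.Conv1d(16, 32, 9, stride=4), nn.ReLU(),
                          nn.AdaptiveAvgPool1d(1), nn.Flatten())
                      self.regressor = nn.Linear(32, 1)

                  def forward(self, x):
                      return self.regressor(self.features(x))

              model = SpectrumForceNet()
              spectra = torch.randn(8, 1, 1024)   # dummy raw interference spectra
              forces_mN = torch.randn(8, 1)       # dummy ground-truth tip forces
              loss = nn.functional.l1_loss(model(spectra), forces_mN)
              loss.backward()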
            
          • M. Gromniak, M. Neidhardt, A. Heinemann, K. Püschel, A. Schlaefer (2020), "Needle placement accuracy in CT-guided robotic post mortem biopsy", Current Directions in Biomedical Engineering. Berlin, Boston, May, 2020. Vol. 6(1),20200031. De Gruyter. [Abstract] [BibTeX] [DOI] [URL]
          • Abstract: Forensic autopsies include a thorough examination of the corpse to detect the source or alleged manner of death as well as to estimate the time since death. However, a full autopsy may not be feasible due to limited time, cost or ethical objections by relatives. Hence, we propose an automated, minimally invasive needle biopsy system with a robotic arm, which does not require any online calibrations during a procedure. The proposed system can be easily integrated into the workflow of a forensic biopsy since the robot can be flexibly positioned relative to the corpse. With our proposed system, we performed needle insertions into wax phantoms and livers of two corpses and achieved accuracies of 4.34 ± 1.27 mm and 10.81 ± 4.44 mm, respectively.
            BibTeX:
            @article{Gromniak2020a,
              author = {M. Gromniak and M. Neidhardt and A. Heinemann and K. Püschel and A. Schlaefer},
              title = {Needle placement accuracy in CT-guided robotic post mortem biopsy},
              journal = {Current Directions in Biomedical Engineering},
              publisher = {De Gruyter},
              year = {2020},
              volume = {6},
              number = {1},
              pages = {20200031},
              url = {https://www.degruyter.com/view/journals/cdbme/6/1/article-20200031.xml},
              doi = {10.1515/cdbme-2020-0031}
            }
            
          • J. Krüger, R. Opfer, N. Gessert, A.-C. Ostwaldt, P. Manogaran, H. H. Kitzler, A. Schlaefer, S. Schippling (2020), "Fully automated longitudinal segmentation of new or enlarged multiple sclerosis lesions using 3D convolutional neural networks", NeuroImage: Clinical. Vol. 28,102445. [Abstract] [BibTeX] [DOI] [URL]
          • Abstract: The quantification of new or enlarged lesions from follow-up MRI scans is an important surrogate of clinical disease activity in patients with multiple sclerosis (MS). Not only is manual segmentation time consuming, but inter-rater variability is high. Currently, only a few fully automated methods are available. We address this gap in the field by employing a 3D convolutional neural network (CNN) with encoder-decoder architecture for fully automatic longitudinal lesion segmentation. Input data consist of two fluid attenuated inversion recovery (FLAIR) images (baseline and follow-up) per patient. Each image is entered into the encoder and the feature maps are concatenated and then fed into the decoder. The output is a 3D mask indicating new or enlarged lesions (compared to the baseline scan). The proposed method was trained on 1809 single point and 1444 longitudinal patient data sets and then validated on 185 independent longitudinal data sets from two different scanners. From the two validation data sets, manual segmentations were available from three experienced raters, respectively. The performance of the proposed method was compared to the open source Lesion Segmentation Toolbox (LST), which is a current state-of-the-art longitudinal lesion segmentation method. The mean lesion-wise inter-rater sensitivity was 62%, while the mean inter-rater number of false positive (FP) findings was 0.41 lesions per case. The two validated algorithms showed a mean sensitivity of 60% (CNN), 46% (LST) and a mean FP of 0.48 (CNN), 1.86 (LST) per case. Sensitivity and number of FP were not significantly different (p < 0.05) between the CNN and manual raters. New or enlarged lesions counted by the CNN algorithm appeared to be comparable with manual expert ratings. The proposed algorithm seems to outperform currently available approaches, particularly LST. The high inter-rater variability in case of manual segmentation indicates the complexity of identifying new or enlarged lesions. An automated CNN-based approach can quickly provide an independent and deterministic assessment of new or enlarged lesions from baseline to follow-up scans with acceptable reliability.
            BibTeX:
            @article{Krueger2020,
              author = {J. Krüger and R. Opfer and N. Gessert and A.-C. Ostwaldt and P. Manogaran and H. H. Kitzler and A. Schlaefer and S. Schippling},
              title = {Fully automated longitudinal segmentation of new or enlarged multiple sclerosis lesions using 3D convolutional neural networks},
              journal = {NeuroImage: Clinical},
              year = {2020},
              volume = {28},
              pages = {102445},
              url = {http://www.sciencedirect.com/science/article/pii/S2213158220302825},
              doi = {10.1016/j.nicl.2020.102445}
            }
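
            Illustrative sketch (Python): a strongly reduced version of the overall idea, i.e. a shared 3D encoder applied to the baseline and follow-up FLAIR volumes, concatenation of the resulting feature maps, and a decoder predicting a change mask; layer sizes are toy values and skip connections are omitted.
              import torch
              import torch.nn as nn

              class ToyLongitudinalNet(nn.Module):
                  def __init__(self):
                      super().__init__()
                      self.encoder = nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
                                                   nn.MaxPool3d(2),
                                                   nn.Conv3d(8, 16, 3, padding=1), nn.ReLU())
                      self.decoder = nn.Sequential(nn.Conv3d(32, 16, 3, padding=1), nn.ReLU(),
                                                   nn.Upsample(scale_factor=2, mode="trilinear",
                                                               align_corners=False),
                                                   nn.Conv3d(16, 1, 1))  # logits for new/enlarged lesions

                  def forward(self, baseline, followup):
                      f0, f1 = self.encoder(baseline), self.encoder(followup)   # shared weights
                      return self.decoder(torch.cat([f0, f1], dim=1))

              model = ToyLongitudinalNet()
              baseline = torch.randn(1, 1, 32, 64, 64)   # dummy FLAIR volumes
              followup = torch.randn(1, 1, 32, 64, 64)
              print(model(baseline, followup).shape)     # torch.Size([1, 1, 32, 64, 64])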
            
          • S. Latus, P. Breitfeld, M. Neidhardt, W. Reip, C. Zöllner, A. Schlaefer (2020), "Boundary prediction during epidural punctures based on OCT relative motion analysis", EUR J ANAESTH., 6, 2020. Vol. 2020(Volume 37 | e-Supplement 58 | June 2020) Lippincott Williams and Wilkins. [Abstract] [BibTeX] [URL]
          • Abstract: 1 Background and Goal of Study: Until today, physicians mainly use their haptic impression for the correct positioning of the needle during epidural punctures. "Blind" techniques such as Loss-of-Resistance (LOR) and saline drop methods [1, 2] help to identify the epidural space, but the error rate is highly dependent on the physician's expertise. The challenge of a direct needle insertion through different tissues into the epidural space, with reliable identification of the same and without bone contact, requires a lot of experience from the performer. A penetration into the spinal column space has to be absolutely avoided [3]. Different imaging modalities such as ultrasound (US) [4] are used to assist the needle navigation task in a few anesthesia cases. However, external tracking of the needle path is limited due to the anatomy around the spinal cord. Hence, optical fibers are integrated in epidural needles to enable high-resolution optical coherence tomography (OCT) imaging of the punctured tissue [5]. Deep learning approaches based on morphological information of OCT intensity data facilitate an online identification of tissue structures along the needle trajectory [6]. Furthermore, fiber Bragg gratings (FBG) are integrated to measure the forces during punctures [7]. In this study, we propose a novel method to determine both the morphological and mechanical properties of tissues during epidural punctures. In addition to the intensity, the OCT phase data is used to differentiate between tissue structures and to identify ruptures and deformations in front of the needle tip in order to detect relevant tissue structures and boundaries.
            2 Material and Methods: We perform ex-vivo epidural punctures in a pig cadaver model using an adapted epidural Tuohy-needle with an integrated forward facing OCT fiber (Fig. 1). During manual punctures we capture ground-truth information of the needle pose and interacting forces at the needle shaft using an optical tracking system (fusionTrack 500, Atracsys) and a force-torque (FT) sensor (SRI, Sunrise Instruments), respectively. In addition, the physician feeds back his haptic impression (e.g., harder or softer resistance in tissues and the sensing of a slight click when penetrating the ligamentum flavum) and his expectation of the boundary interactions. We acquire one-dimensional OCT depth scans (A-scans) throughout the punctures using a spectral domain OCT system (Telesto, Thorlabs) with a constant A-scan frequency of 91 kHz. Using both the acquired OCT intensity and phase data, we are able to magnify the tissue properties in front of the needle towards the haptic feedback sensed by the FT sensor and the physician. We analyze the tissue interactions in front of the needle by means of the relative motion derived from the OCT phase differences [8]. An increase of the determined relative motion can be related to deformations and following ruptures at tissue boundaries, whereas negative values are associated with negative needle motion w.r.t. the tissue. We use the haptic impression of the physician as ground-truth information for the detected boundary interactions.
            [Fig. 1: Setup for OCT deformation analysis during ex-vivo epidural punctures. The epidural needle is attached to a force-torque (FT) sensor with an associated tracking marker, and an optical fiber integrated in the Tuohy-needle enables forward facing A-scan acquisitions. A physician navigates the OCT needle towards the epidural space while OCT, FT, and tracking data are captured synchronously. During an epidural puncture the needle needs to be navigated through skin, supraspinous ligament, fat/muscle tissue, and ligamentum flavum, preventing a rupture of the dura or a collision with bone structures.]
            3 Results and Discussion: As an example, OCT intensity data is shown with overlaid relative motion (green and red) and the measured force in needle direction (blue line) for one epidural puncture (Fig. 2). In case of boundary deformations and following ruptures, the relative motion increases rapidly (time points B, C, and E). Especially during the puncture of the ligamentum flavum (E to F), multiple ruptures are detected until an LOR is measured with the FT sensor. Between C and D several small ruptures appear due to sinews and muscle structures. In contrast, the highlighted time points without tissue boundary ruptures (A and D) do not show an increase of the estimated relative motion. After the bone contact (D) the needle is pulled backwards and an obvious negative relative motion (red) follows.
            [Fig. 2: OCT intensity and relative motion estimations related to the externally measured forces for an exemplary epidural puncture. The relative motion is depicted in green and red for positive and negative values, respectively. The time points (A-F) are related to different needle-tissue interactions: A) re-orientation of the needle, B) first rupture at the skin, C) second rupture at the supraspinous ligament, D) bone contact and following needle re-orientation, E) start of ruptures at the ligamentum flavum, and F) LOR after the ligamentum flavum.]
            4 Conclusion: We propose a forward facing OCT needle design to enable the evaluation of both OCT speckle due to different tissue structures and the relative motion, in order to determine relevant deformations and ruptures at tissue boundaries. Hence, we are able to sense and thereby magnify the tissue mechanics during and prior to boundary punctures without additional sensors such as FBGs.
            BibTeX:
            @article{Latus2020,
              author = {S. Latus and P. Breitfeld and M. Neidhardt and W. Reip and C. Zöllner and A. Schlaefer},
              title = {Boundary prediction during epidural punctures based on OCT relative motion analysis},
              journal = {EUR J ANAESTH},
              publisher = {Lippincott Williams and Wilkins},
              year = {2020},
              volume = {2020},
              number = {Volume 37 | e-Supplement 58 | June 2020},
              url = {https://fis-uke.de/portal/de/publications/boundary-prediction-during-epidural-punctures-based-on-oct-relative-motion-analysis(7bdce9d3-5347-44a7-a5d7-ae7991d5f9b7).html}
            }
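
            Illustrative sketch (Python): a generic Doppler-style computation of relative axial motion from the phase difference of consecutive complex A-scans; the synthetic signal, center wavelength, and refractive index are assumptions, not the processing used in the study.
              import numpy as np

              rng = np.random.default_rng(0)
              a_scans = rng.standard_normal((100, 512)) + 1j * rng.standard_normal((100, 512))

              # Phase difference between successive A-scans, averaged over depth.
              phase_diff = np.angle(a_scans[1:] * np.conj(a_scans[:-1])).mean(axis=1)

              lambda_0 = 1310e-9   # assumed center wavelength in m
              n_tissue = 1.38      # assumed refractive index
              displacement = phase_diff * lambda_0 / (4 * np.pi * n_tissue)   # m per A-scan pair
              print(displacement[:5])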
            
          • R. Mieling, S. Latus, N. Gessert, M. Lutz, A. Schlaefer (2020), "Deep learning-based rotation frequency estimation and NURD correction for IVOCT image data", International Journal of CARS (Suppl 1)., June, 2020. Vol. 15(1),162-163. [Abstract] [BibTeX] [DOI]
          • Abstract: Atherosclerotic plaque in coronary arteries can lead to myocardial infarction and is one of the leading causes of death. Intravascular optical coherence tomography (IVOCT) can be used to image the affected blood vessels for assessment and treatment. However, catheter bending often causes changes in the rotation frequency of the optical probe during acquisition. The resulting non-uniform rotation distortion (NURD) artefacts complicate the image interpretation and may affect the diagnosis. Deep learning methods have been proposed to analyze IVOCT image data, including plaque detection [1] and feature extraction [2]. We present a novel approach to directly estimate the rotation frequency of the optical probe from a sequence of IVOCT images. We illustrate that this allows a proper correction of NURD artefacts
            BibTeX:
            @article{Mieling2020,
              author = {R. Mieling and S. Latus and N. Gessert and M. Lutz and A. Schlaefer},
              title = {Deep learning-based rotation frequency estimation and NURD correction for IVOCT image data},
              booktitle = {(Suppl1) International Journal of CARS'2020},
              journal = {(Suppl1) International Journal of CARS'2020},
              year = {2020},
              volume = {15},
              number = {1},
              pages = {162-163},
              doi = {10.1007/s11548-020-02171-6}
            }
            
          • M. Neidhardt, M. Bengs, S. Latus, M. Schlüter, T. Saathoff, A. Schlaefer (2020), "4D Deep learning for real-time volumetric optical coherence elastography", International Journal of Computer Assisted Radiology and Surgery, 2020. ,1861-6429. [Abstract] [BibTeX] [DOI] [URL]
          • Abstract: Elasticity of soft tissue provides valuable information to physicians during treatment and diagnosis of diseases. A number of approaches have been proposed to estimate tissue stiffness from the shear wave velocity. Optical coherence elastography offers a particularly high spatial and temporal resolution. However, current approaches typically acquire data at different positions sequentially, making it slow and less practical for clinical application
            BibTeX:
            @inproceedings{Neidhardt2020a,
              author = {M. Neidhardt and M. Bengs and S. Latus and M. Schlüter and T. Saathoff and A. Schlaefer},
              title = {4D Deep learning for real-time volumetric optical coherence elastography},
              booktitle = {International Journal of Computer Assisted Radiology and Surgery 2020},
              journal = {International Journal of Computer Assisted Radiology and Surgery},
              year = {2020},
              pages = {1861-6429},
              url = {https://doi.org/10.1007/s11548-020-02261-5},
              doi = {10.1007/s11548-020-02261-5}
            }
            
          • M. Neidhardt, M. Bengs, S. Latus, M. Schlüter, T. Saathoff, A. Schlaefer (2020), "Deep Learning for High Speed Optical Coherence Elastography", In IEEE International Symposium on Biomedical Imaging. ,1583-1586. [Abstract] [BibTeX] [DOI]
          • Abstract: Mechanical properties of tissue provide valuable information for identifying lesions. One approach to obtain quantitative estimates of elastic properties is shear wave elastography with optical coherence elastography (OCE). However, given the shear wave velocity, it is still difficult to estimate elastic properties. Hence, we propose deep learning to directly predict elastic tissue properties from OCE data. We acquire 2D images with a frame rate of 30 kHz and use convolutional neural networks to predict gelatin concentration, which we use as a surrogate for tissue elasticity. We compare our deep learning approach to predictions from conventional regression models, using the shear wave velocity as a feature. Mean absolute prediction errors for the conventional approaches range from 1.32±0.98 p.p. to 1.57±1.30 p.p., whereas we report an error of 0.90±0.84 p.p. for the convolutional neural network with 3D spatio-temporal input. Our results indicate that deep learning on spatio-temporal data outperforms elastography based on explicit shear wave velocity estimation.
            BibTeX:
            @inproceedings{Neidhardt2020,
              author = {M. Neidhardt and M. Bengs and S. Latus and M. Schlüter and T. Saathoff and A. Schlaefer},
              title = {Deep Learning for High Speed Optical Coherence Elastography},
              booktitle = {IEEE International Symposium on Biomedical Imaging},
              year = {2020},
              pages = {1583-1586},
              doi = {10.1109/ISBI45749.2020.9098422}
            }
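
            Illustrative sketch (Python): a toy version of the conventional baseline referred to above, i.e. a simple linear regression from shear wave velocity to gelatin concentration evaluated with the mean absolute error in percentage points; all data are synthetic.
              import numpy as np

              rng = np.random.default_rng(1)
              concentration = rng.uniform(5, 15, size=50)              # gelatin concentration in p.p.
              velocity = 0.6 * concentration + rng.normal(0, 0.4, 50)  # synthetic shear wave speeds

              coeffs = np.polyfit(velocity, concentration, deg=1)      # linear baseline model
              predicted = np.polyval(coeffs, velocity)
              mae = np.mean(np.abs(predicted - concentration))
              print(f"baseline MAE: {mae:.2f} p.p.")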
            
          • M. Neidhardt, N. Gessert, T. Gosau, J. Kemmling, S. Feldhaus, U. Schumacher, A. Schlaefer (2020), "Force estimation from 4D OCT data in a human tumor xenograft mouse model", Current Directions in Biomedical Engineering. Berlin, Boston, May, 2020. Vol. 6(1),20200022. De Gruyter. [Abstract] [BibTeX] [DOI] [URL]
          • Abstract: Minimally invasive robotic surgery offers benefits such as reduced physical trauma, faster recovery and less pain for the patient. For these procedures, visual and haptic feedback to the surgeon is crucial when operating surgical tools without line-of-sight with a robot. External force sensors are biased by friction at the tool shaft and thereby cannot estimate forces between tool tip and tissue. As an alternative, vision-based force estimation was proposed. Here, interaction forces are directly learned from deformation observed by an external imaging system. Recently, an approach based on optical coherence tomography and deep learning has shown promising results. However, most experiments are performed on ex-vivo tissue. In this work, we demonstrate that models trained on dead tissue do not perform well on in vivo data. We performed multiple experiments on a human tumor xenograft mouse model, both on in vivo, perfused tissue and dead tissue. We compared two deep learning models in different training scenarios. Training on perfused, in vivo data improved model performance by 24% for in vivo force estimation.
            BibTeX:
            @article{Neidhardt2020b,
              author = {M. Neidhardt and N. Gessert and T. Gosau and J. Kemmling and S. Feldhaus and U. Schumacher and A. Schlaefer},
              title = {Force estimation from 4D OCT data in a human tumor xenograft mouse model},
              journal = {Current Directions in Biomedical Engineering},
              publisher = {De Gruyter},
              year = {2020},
              volume = {6},
              number = {1},
              pages = {20200022},
              url = {https://www.degruyter.com/view/journals/cdbme/6/1/article-20200022.xml},
              doi = {10.1515/cdbme-2020-0022}
            }
            
          • A. Rogalla, T. Kamph, U. Bulmann, K. Billerbeck, M. Blumreiter, S. Schupp (2020), "Designing And Analyzing Open Application-Oriented Labs in Software-Verification Education", Proceedings of the 48th Annual SEFI Conference., In Annual Conference of European Society for Engineering Education (SEFI). Enschede (the Netherlands)., September, 2020. Vol. 48,421-430. [Abstract] [BibTeX]
          • Abstract: The daily work of a software engineer frequently includes the design and implementation of systems in non-software-engineering disciplines, like medical technology, often in interdisciplinary teams. In order to successfully select and apply the appropriate theoretical concept to perform the task, it is necessary to understand the actual problem, possibly outside one’s personal subject area, and to find an appropriate abstraction. However, software-engineering education often focuses on the theoretical concepts alone, ignoring the necessary skills to solve interdisciplinary tasks. We argue that the use of open, application-oriented labs creates a synergetic effect in the understanding of theoretical concepts and the ability to apply them to solve practical issues. The subject of this study is a lab in the master’s program module “Software Verification” at a German university of technology. Therein, student groups solve openly formulated, application-oriented modeling tasks in the field of medical technology. In this paper, we present the design and an analysis of this lab by means of a student questionnaire after completion of the lab and a document analysis of 32 laboratory reports. Our survey results show that more than 90 % of the respondents state that the practical labs helped them to understand the theoretical content of the lectures. The evaluation of the lab reports shows that around half of the student groups were able to understand, abstract, and model the task correctly. We conclude that the inclusion of open, application-oriented labs in software-engineering education is beneficial to both the understanding of theoretical concepts and the ability to solve interdisciplinary tasks.
            BibTeX:
            @inproceedings{Rogalla2020a,
              author = {A. Rogalla and T. Kamph and U. Bulmann and K. Billerbeck and M. Blumreiter and S. Schupp},
              title = {Designing And Analyzing Open Application-Oriented Labs in Software-Verification Education},
              booktitle = {Annual Conference of European Society for Engineering Education (SEFI). Enschede (the Netherlands)},
              journal = {Proceedings of the 48th Annual SEFI Conference},
              year = {2020},
              volume = {48},
              pages = {421-430}
            }
            
          • A. Rogalla, S. Lehmann, M. Neidhardt, J. Sprenger, M. Bengs, A. Schlaefer, S. Schupp (2020), "Synthesizing Strategies for Needle Steering in Gelatin Phantoms", MARS 2020., In Models for Formal Analysis of Real Systems (MARS 2020). [Abstract] [BibTeX] [DOI] [URL]
          • Abstract: In medicine, needles are frequently used to deliver treatments to subsurface targets or to take tissue samples from the inside of an organ. Current clinical practice is to insert needles under image guidance or haptic feedback, although that may involve reinsertions and adjustments since the needle and its interaction with the tissue during insertion cannot be completely controlled. (Automated) needle steering could in theory improve the accuracy with which a target is reached and thus reduce surgical traumata, especially for minimally invasive procedures, e.g., brachytherapy or biopsy. Yet, flexible needles and needle-tissue interaction are both complex and expensive to model and can often only be computed approximately. In this paper we propose to employ timed games to navigate flexible needles with a bevel tip to reach a fixed target in tissue. We use a simple non-holonomic model of needle-tissue interaction, which abstracts in particular from the various physical forces involved and appears to be simplistic compared to related models from medical robotics. Based on the model, we synthesize strategies from which we can derive sufficiently precise motion plans to steer the needle in soft tissue. However, applying those strategies in practice, one is faced with the problem of an unpredictable behavior of the needle at the initial insertion point. Our proposal is to implement a preprocessing step to initialize the model based on data from the real system, once the needle is inserted. Taking into account the actual needle tip angle and position, we generate strategies to reach the desired target. We have implemented the model in Uppaal Stratego and evaluated it on steering a flexible needle in gelatin phantoms; gelatin phantoms are commonly used in medical technology to simulate the behavior of soft tissue. The experiments show that strategies can be synthesized for both generated and measured needle motions with a maximum deviation of 1.84 mm.
            BibTeX:
            @inproceedings{Rogalla2020,
              author = {A. Rogalla and S. Lehmann and M. Neidhardt and J. Sprenger and M. Bengs and A. Schlaefer and S. Schupp},
              title = {Synthesizing Strategies for Needle Steering in Gelatin Phantoms},
              booktitle = {Models for Formal Analysis of Real Systems (MARS 2020)},
              journal = {MARS 2020},
              year = {2020},
              url = {http://hdl.handle.net/11420/6107},
              doi = {10.4204/EPTCS.316.10}
            }
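
            Illustrative sketch (Python): the entry above relies on a simple non-holonomic needle model whose details are not given here, so this snippet only shows a generic 2D bevel-tip kinematics simulation (constant curvature, periodic 180° tip flips) to illustrate the kind of motion such strategies have to reason about.
              import math

              def simulate(insertion_mm, curvature_per_mm, flip_every_mm, step=0.1):
                  """Integrate tip position (x, y) and heading theta along the insertion depth."""
                  x = y = theta = 0.0
                  direction = 1.0
                  path = [(x, y)]
                  s = 0.0
                  while s < insertion_mm:
                      theta += direction * curvature_per_mm * step
                      x += math.cos(theta) * step
                      y += math.sin(theta) * step
                      s += step
                      if flip_every_mm and s % flip_every_mm < step:   # periodic bevel flip
                          direction *= -1.0
                      path.append((x, y))
                  return path

              tip_path = simulate(insertion_mm=80, curvature_per_mm=0.01, flip_every_mm=20)
              print(tip_path[-1])   # final tip position after 80 mm insertion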
            
          • M. Schlüter, L. Glandorf, M. Gromniak, T. Saathoff, A. Schlaefer (2020), "Concept for Markerless 6D Tracking Employing Volumetric Optical Coherence Tomography", Sensors. Vol. 20(9),2678. [Abstract] [BibTeX] [DOI]
          • Abstract: Optical tracking systems are widely used, for example, to navigate medical interventions. Typically, they require the presence of known geometrical structures, the placement of artificial markers, or a prominent texture on the target’s surface. In this work, we propose a 6D tracking approach employing volumetric optical coherence tomography (OCT) images. OCT has a micrometer-scale resolution and employs near-infrared light to penetrate a few millimeters into, for example, tissue. Thereby, it provides sub-surface information which we use to track arbitrary targets, even with poorly structured surfaces, without requiring markers. Our proposed system can shift the OCT’s field-of-view in space and uses an adaptive correlation filter to estimate the motion at multiple locations on the target. This allows one to estimate the target’s position and orientation. We show that our approach is able to track translational motion with root-mean-squared errors below 0.25 mm and in-plane rotations with errors below 0.3°. For out-of-plane rotations, our prototypical system can achieve errors around 0.6°.
            BibTeX:
            @article{Schlueter2020,
              author = {M. Schlüter and L. Glandorf and M. Gromniak and T. Saathoff and A. Schlaefer},
              title = {Concept for Markerless 6D Tracking Employing Volumetric Optical Coherence Tomography},
              journal = {Sensors},
              year = {2020},
              volume = {20},
              number = {9},
              pages = {2678},
              doi = {10.3390/s20092678}
            }
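
            Illustrative sketch (Python): not the adaptive correlation filter from the paper, but a minimal phase-correlation example showing how a translational shift between two image patches can be estimated from the peak of the cross-power spectrum.
              import numpy as np

              rng = np.random.default_rng(2)
              reference = rng.standard_normal((64, 64))
              shifted = np.roll(reference, shift=(3, -5), axis=(0, 1))   # known shift to recover

              cross_power = np.fft.fft2(reference) * np.conj(np.fft.fft2(shifted))
              cross_power /= np.abs(cross_power) + 1e-12
              correlation = np.fft.ifft2(cross_power).real

              peak = np.unravel_index(np.argmax(correlation), correlation.shape)
              # Map peak indices to signed shifts.
              shift = [p if p <= s // 2 else p - s for p, s in zip(peak, correlation.shape)]
              print(shift)   # [-3, 5]: the translation that maps "shifted" back onto "reference"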
            
          • M. Schlüter, L. Glandorf, J. Sprenger, M. Gromniak, M. Neidhardt, T. Saathoff, A. Schlaefer (2020), "High-Speed Markerless Tissue Motion Tracking Using Volumetric Optical Coherence Tomography Images", In IEEE International Symposium on Biomedical Imaging. ,1979-1982. [Abstract] [BibTeX] [DOI]
          • Abstract: Modern optical coherence tomography (OCT) devices provide volumetric images with micrometer-scale spatial resolution and a temporal resolution beyond video rate. In this work, we analyze an OCT-based prototypical tracking system which processes 831 volumes per second, estimates translational motion, and automatically adjusts the field-of-view, which has a size of a few millimeters, to follow a sample even along larger distances. The adjustment is realized by two galvo mirrors and a motorized reference arm, such that no mechanical movement of the scanning setup is necessary. Without requiring a marker or any other knowledge about the sample, we demonstrate that reliable tracking of velocities up to 25 mm/s is possible with mean tracking errors in the order of 0.25 mm. Further, we report successful tracking of lateral velocities up to 70 mm/s with errors below 0.3 mm.
            BibTeX:
            @inproceedings{Schlueter2020a,
              author = {M. Schlüter and L. Glandorf and J. Sprenger and M. Gromniak and M. Neidhardt and T. Saathoff and A. Schlaefer},
              title = {High-Speed Markerless Tissue Motion Tracking Using Volumetric Optical Coherence Tomography Images},
              booktitle = {IEEE International Symposium on Biomedical Imaging},
              year = {2020},
              pages = {1979-1982},
              doi = {10.1109/ISBI45749.2020.9098448}
            }
            
          • M. Seemann, L. Bargsten, A. Schlaefer (2020), "Data augmentation for computed tomography angiography via synthetic image generation and neural domain adaptation", Current Directions in Biomedical Engineering. Berlin, Boston Vol. 6(1),20200015. De Gruyter. [Abstract] [BibTeX] [DOI] [URL]
          • Abstract: Deep learning methods produce promising results when applied to a wide range of medical imaging tasks, including segmentation of artery lumen in computed tomography angiography (CTA) data. However, to perform sufficiently, neural networks have to be trained on large amounts of high quality annotated data. In the realm of medical imaging, annotations are not only quite scarce but also often not entirely reliable. To tackle both challenges, we developed a two-step approach for generating realistic synthetic CTA data for the purpose of data augmentation. In the first step moderately realistic images are generated in a purely numerical fashion. In the second step these images are improved by applying neural domain adaptation. We evaluated the impact of synthetic data on lumen segmentation via convolutional neural networks (CNNs) by comparing resulting performances. Improvements of up to 5% in terms of Dice coefficient and 20% for Hausdorff distance represent a proof of concept that the proposed augmentation procedure can be used to enhance deep learning-based segmentation for artery lumen in CTA images
            BibTeX:
            @article{Seemann2020,
              author = {M. Seemann and L. Bargsten and A. Schlaefer},
              title = {Data augmentation for computed tomography angiography via synthetic image generation and neural domain adaptation},
              journal = {Current Directions in Biomedical Engineering},
              publisher = {De Gruyter},
              year = {2020},
              volume = {6},
              number = {1},
              pages = {20200015},
              url = {https://www.degruyter.com/view/journals/cdbme/6/1/article-20200015.xml},
              doi = {10.1515/cdbme-2020-0015}
            }
            

            2019

            • L. Bargsten, M. Wendebourg, A. Schlaefer (2019), "Data Representations for Segmentation of Vascular Structures Using Convolutional Neural Networks with U-Net Architecture", In 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)., jul, 2019. ,989-992. IEEE. [Abstract] [BibTeX] [DOI] [URL]
            • Abstract: Convolutional neural networks (CNNs) produce promising results when applied to a wide range of medical imaging tasks including the segmentation of tissue structures. However, segmentation is particularly challenging when the target structures are small with respect to the complete image data and exhibit substantial curvature as in the case of coronary arteries in computed tomography angiography (CTA). Therefore, we evaluated the impact of data representation of tubular structures on the segmentation performance of CNNs with U-Net architecture in terms of the resulting Dice coefficients and Hausdorff distances. For this purpose, we considered 2D and 3D input data in cross-sectional and Cartesian representations. We found that the data representation can have a substantial impact on segmentation results with Dice coefficients ranging from 60% to 82% reaching values similar to those of human expert annotations used for training and Hausdorff distances ranging from 1.38 mm to 5.90 mm. Our results indicate that a 3D cross-sectional data representation is preferable for segmentation of thin tubular structures
              BibTeX:
              @inproceedings{Bargsten2019,
                author = {Lennart Bargsten and Mareike Wendebourg and Alexander Schlaefer},
                title = {Data Representations for Segmentation of Vascular Structures Using Convolutional Neural Networks with U-Net Architecture},
                booktitle = {2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)},
                publisher = {IEEE},
                year = {2019},
                pages = {989-992},
                url = {https://embs.papercept.net/conferences/conferences/EMBC19/program/EMBC19_ContentListWeb_2.html},
                doi = {10.1109/embc.2019.8857630}
              }
              
            • M. Bengs, N. Gessert, A. Schlaefer (2019), "4D Spatio-Temporal Deep Learning with 4D fMRI Data for Autism Spectrum Disorder Classification", In Proceedings of International Conference on Medical Imaging with Deep Learning. [Abstract] [BibTeX] [URL]
            • Abstract: Autism spectrum disorder (ASD) is associated with behavioral and communication problems. Often, functional magnetic resonance imaging (fMRI) is used to detect and characterize brain changes related to the disorder. Recently, machine learning methods have been employed to reveal new patterns by trying to classify ASD from spatio-temporal fMRI images. Typically, these methods have either focused on temporal or spatial information processing. Instead, we propose a 4D spatio-temporal deep learning approach for ASD classification where we jointly learn from spatial and temporal data. We employ 4D convolutional neural networks and convolutional-recurrent models which outperform a previous approach with an F1-score of 0.71 compared to an F1-score of 0.65
              BibTeX:
              @conference{Bengs2019,
                author = {Marcel Bengs and Nils Gessert and Alexander Schlaefer},
                title = {4D Spatio-Temporal Deep Learning with 4D fMRI Data for Autism Spectrum Disorder Classification},
                booktitle = {Proceedings of International Conference on Medical Imaging with Deep Learning},
                year = {2019},
                url = {https://openreview.net/forum?id=HklAUVnV5V}
              }
              
            • R. Buchert, J. Krüger, N. Gessert, W. Lehnert, I. Apostolova, S. Klutmann, A. Schlaefer (2019), "Deep Learning in der SPECT und PET des Gehirns", Der Nuklearmediziner., jun, 2019. Vol. 42(02),118-132. Georg Thieme Verlag KG. [Abstract] [BibTeX] [DOI]
            • Abstract: Deep learning has led to stunning achievements in many areas in recent years, including medical image processing. After a brief discussion of the basic principles of deep learning, some selected applications of deep learning in SPECT and PET of the brain will be presented.
              BibTeX:
              @article{Buchert2019,
                author = {Ralph Buchert and Julia Krüger and Nils Gessert and Wencke Lehnert and Ivayla Apostolova and Susanne Klutmann and Alexander Schlaefer},
                title = {Deep Learning in der SPECT und PET des Gehirns},
                journal = {Der Nuklearmediziner},
                publisher = {Georg Thieme Verlag KG},
                year = {2019},
                volume = {42},
                number = {02},
                pages = {118-132},
                doi = {10.1055/a-0838-8124}
              }
              
            • S. Gerlach, M. Schlüter, C. Fürweger, A. Schlaefer (2019), "Machbarkeit CNN-basierter Erzeugung von Kandidatenstrahlen für Radiochirurgie der Prostata", CURAC 2019 Conference Proceedings., In CURAC 2019 Tagungsband. Reutlingen, Sep, 2019. ,128-129. [Abstract] [BibTeX]
            • Abstract: In radiosurgery, a robotic arm is used to enable dose delivery from almost arbitrarily many directions. However, because of this flexibility, treatment planning is a challenging task. Typically, a heuristic based on randomized candidate beams is used to limit the number of beam directions that are actually considered. In contrast, we propose using a convolutional neural network to generate candidate beams based on treatment plans of other patients. Our results show that this approach requires only half as many candidate beams for comparable plan quality.
              BibTeX:
              @inproceedings{Gerlach2019,
                author = {Stefan Gerlach and Matthias Schlüter and Christoph Fürweger and Alexander Schlaefer},
                title = {Machbarkeit CNN-basierter Erzeugung von Kandidatenstrahlen für Radiochirurgie der Prostata},
                booktitle = {CURAC 2019 Tagungsband},
                journal = {CURAC 2019 Conference Proceedings},
                year = {2019},
                pages = {128-129}
              }
              
            • N. Gessert (2019), "Skin Lesion Classification Using Loss Balancing, Ensembles of Multi-Resolution EfficientNets and Meta Data", In ISIC Skin Lesion Classification Challenge @ MICCAI 2019. [BibTeX] [URL]
            • BibTeX:
              @inproceedings{Gessert2019k,
                author = {Nils Gessert},
                title = {Skin Lesion Classification Using Loss Balancing, Ensembles of Multi-Resolution EfficientNets and Meta Data},
                booktitle = {ISIC Skin Lesion Classification Challenge @ MICCAI 2019},
                year = {2019},
                url = {https://challenge2019.isic-archive.com/leaderboard.html}
              }
              
            • N. Gessert, M. Bengs, L. Wittig, D. Drömann, T. Keck, A. Schlaefer, D. B. Ellebrecht (2019), "Deep transfer learning methods for colon cancer classification in confocal laser microscopy images", International Journal of Computer Assisted Radiology and Surgery., May, 2019. Vol. 14(11),1837-1845. Springer Science and Business Media LLC. [Abstract] [BibTeX] [DOI] [URL]
            • Abstract: The gold standard for colorectal cancer metastases detection in the peritoneum is histological evaluation of a removed tissue sample. For feedback during interventions, real-time in vivo imaging with confocal laser microscopy has been proposed for differentiation of benign and malignant tissue by manual expert evaluation. Automatic image classification could improve the surgical workflow further by providing immediate feedback.
              BibTeX:
              @article{Gessert2019h,
                author = {Nils Gessert and Marcel Bengs and Lukas Wittig and Daniel Drömann and Tobias Keck and Alexander Schlaefer and David B. Ellebrecht},
                title = {Deep transfer learning methods for colon cancer classification in confocal laser microscopy images},
                journal = {International Journal of Computer Assisted Radiology and Surgery},
                publisher = {Springer Science and Business Media LLC},
                year = {2019},
                volume = {14},
                number = {11},
                pages = {1837--1845},
                url = {https://doi.org/10.1007/s11548-019-02004-1},
                doi = {10.1007/s11548-019-02004-1}
              }
              
            • N. Gessert, M. Gromniak, M. Bengs, L. Matthäus, A. Schlaefer (2019), "Towards Deep Learning-Based EEG Electrode Detection Using Automatically Generated Labels", arXiv e-prints., In CURAC 2019 Tagungsband. Reutlingen, Aug, 2019. ,176-180. [Abstract] [BibTeX] [URL]
            • Abstract: Electroencephalography (EEG) allows for source measurement of electrical brain activity. Particularly for inverse localization, the electrode positions on the scalp need to be known. Often, systems such as optical digitizing scanners are used for accurate localization with a stylus. However, the approach is time-consuming as each electrode needs to be scanned manually and the scanning systems are expensive. We propose using an RGBD camera to directly track electrodes in the images using deep learning methods. Studying and evaluating deep learning methods requires large amounts of labeled data. To overcome the time-consuming data annotation, we generate a large number of ground-truth labels using a robotic setup. We demonstrate that deep learning-based electrode detection is feasible with a mean absolute error of 5.69 ± 6.10 mm and that our annotation scheme provides a useful environment for studying deep learning methods for electrode detection.
              BibTeX:
              @inproceedings{Gessert2019j,
                author = {Nils Gessert and Martin Gromniak and Marcel Bengs and Lars Matthäus and Alexander Schlaefer},
                title = {Towards Deep Learning-Based EEG Electrode Detection Using Automatically Generated Labels},
                booktitle = {CURAC 2019 Tagungsband},
                journal = {arXiv e-prints},
                year = {2019},
                pages = {176-180},
                url = {https://arxiv.org/abs/1908.04186}
              }
              
            • N. Gessert, M. Gromniak, M. Schlüter, A. Schlaefer (2019), "Two-path 3D CNNs for calibration of system parameters for OCT-based motion compensation", CoRR. Vol. abs/1810.09582 [BibTeX] [URL]
            • BibTeX:
              @article{Gessert2019c,
                author = {Nils Gessert and Martin Gromniak and Matthias Schlüter and Alexander Schlaefer},
                title = {Two-path 3D CNNs for calibration of system parameters for OCT-based motion compensation},
                journal = {CoRR},
                year = {2019},
                volume = {abs/1810.09582},
                url = {http://arxiv.org/abs/1810.09582}
              }
              
            • N. Gessert, S. Latus, Y. S. Abdelwahed, D. M. Leistner, M. Lutz, A. Schlaefer (2019), "Bioresorbable Scaffold Visualization in IVOCT Images Using CNNs and Weakly Supervised Localization", CoRR. Vol. abs/1810.09578 [BibTeX] [URL]
            • BibTeX:
              @article{Gessert2019b,
                author = {Nils Gessert and Sarah Latus and Youssef S. Abdelwahed and David M. Leistner and Matthias Lutz and Alexander Schlaefer},
                title = {Bioresorbable Scaffold Visualization in IVOCT Images Using CNNs and Weakly Supervised Localization},
                journal = {CoRR},
                year = {2019},
                volume = {abs/1810.09578},
                url = {http://arxiv.org/abs/1810.09578}
              }
              
            • N. Gessert, M. Lutz, M. Heyder, S. Latus, D. M. Leistner, Y. S. Abdelwahed, A. Schlaefer (2019), "Automatic Plaque Detection in IVOCT Pullbacks Using Convolutional Neural Networks", IEEE Transactions on Medical Imaging., feb, 2019. Vol. 38(2),426-434. Institute of Electrical and Electronics Engineers (IEEE). [Abstract] [BibTeX] [DOI]
            • Abstract: Coronary heart disease is a common cause of death despite being preventable. To treat the underlying plaque deposits in the arterial walls, intravascular optical coherence tomography can be used by experts to detect and characterize the lesions. In clinical routine, hundreds of images are acquired for each patient which requires automatic plaque detection for fast and accurate decision support. So far, automatic approaches rely on classic machine learning methods and deep learning solutions have rarely been studied. Given the success of deep learning methods with other imaging modalities, a thorough understanding of deep learning-based plaque detection for future clinical decision support systems is required. We address this issue with a new dataset consisting of in-vivo patient images labeled by three trained experts. Using this dataset, we employ state-of-the-art deep learning models that directly learn plaque classification from the images. For improved performance, we study different transfer learning approaches. Furthermore, we investigate the use of cartesian and polar image representations and employ data augmentation techniques tailored to each representation. We fuse both representations in a multi-path architecture for more effective feature exploitation. Last, we address the challenge of plaque differentiation in addition to detection. Overall, we find that our combined model performs best with an accuracy of 91.7%, a sensitivity of 90.9% and a specificity of 92.4%. Our results indicate that building a deep learning-based clinical decision support system for plaque detection is feasible
              BibTeX:
              @article{Gessert2019a,
                author = {Nils Gessert and Matthias Lutz and Markus Heyder and Sarah Latus and David M. Leistner and Youssef S. Abdelwahed and Alexander Schlaefer},
                title = {Automatic Plaque Detection in IVOCT Pullbacks Using Convolutional Neural Networks},
                journal = {IEEE Transactions on Medical Imaging},
                publisher = {Institute of Electrical and Electronics Engineers (IEEE)},
                year = {2019},
                volume = {38},
                number = {2},
                pages = {426--434},
                doi = {10.1109/tmi.2018.2865659}
              }
              
            • N. Gessert, T. Priegnitz, T. Saathoff, S.-T. Antoni, D. Meyer, M. F. Hamann, K.-P. Jünemann, C. Otte, A. Schlaefer (2019), "Spatio-temporal deep learning models for tip force estimation during needle insertion", International Journal of Computer Assisted Radiology and Surgery., may, 2019. Vol. 14(9),1485-1493. Springer Science and Business Media LLC. [Abstract] [BibTeX] [DOI] [URL]
            • Abstract: Precise placement of needles is a challenge in a number of clinical applications such as brachytherapy or biopsy. Forces acting at the needle cause tissue deformation and needle deflection which in turn may lead to misplacement or injury. Hence, a number of approaches to estimate the forces at the needle have been proposed. Yet, integrating sensors into the needle tip is challenging and a careful calibration is required to obtain good force estimates.
              BibTeX:
              @article{Gessert2019,
                author = {Nils Gessert and Torben Priegnitz and Thore Saathoff and Sven-Thomas Antoni and David Meyer and Moritz Franz Hamann and Klaus-Peter Jünemann and Christoph Otte and Alexander Schlaefer},
                title = {Spatio-temporal deep learning models for tip force estimation during needle insertion},
                journal = {International Journal of Computer Assisted Radiology and Surgery},
                publisher = {Springer Science and Business Media LLC},
                year = {2019},
                volume = {14},
                number = {9},
                pages = {1485--1493},
                url = {https://doi.org/10.1007/s11548-019-02006-z},
                doi = {10.1007/s11548-019-02006-z}
              }
              
            • N. Gessert, A. Schlaefer (2019), "Efficient Neural Architecture Search on Low-Dimensional Data for OCT Image Segmentation", In International Conference on Medical Imaging with Deep Learning. [Abstract] [BibTeX] [URL]
            • Abstract: Typically, deep learning architectures are handcrafted for their respective learning problem. As an alternative, neural architecture search (NAS) has been proposed where the architecture's structure is learned in an additional optimization step. For the medical imaging domain, this approach is very promising as there are diverse problems and imaging modalities that require architecture design. However, NAS is very time-consuming and medical learning problems often involve high-dimensional data with high computational requirements. We propose an efficient approach for NAS in the context of medical, image-based deep learning problems by searching for architectures on low-dimensional data which are subsequently transferred to high-dimensional data. For OCT-based layer segmentation, we demonstrate that a search on 1D data reduces search time by 87.5% compared to a search on 2D data while the final 2D models achieve similar performance.
              BibTeX:
              @conference{Gessert2019f,
                author = {Nils Gessert and Alexander Schlaefer},
                title = {Efficient Neural Architecture Search on Low-Dimensional Data for OCT Image Segmentation},
                booktitle = {International Conference on Medical Imaging with Deep Learning},
                year = {2019},
                url = {https://openreview.net/forum?id=Syg3FDjntN}
              }
              
            • N. Gessert, M. Schlüter, S. Latus, V. Volgger, C. Betz, A. Schlaefer (2019), "Towards Automatic Lesion Classification in the Upper Aerodigestive Tract Using OCT and Deep Transfer Learning Methods", arXiv preprint arXiv:1902.03618., In International Congress and Exhibition on Computer Assisted Radiology and Surgery. [Abstract] [BibTeX] [URL]
            • Abstract: Early detection of cancer is crucial for treatment and overall patient survival. In the upper aerodigestive tract (UADT) the gold standard for identification of malignant tissue is an invasive biopsy. Recently, non-invasive imaging techniques such as confocal laser microscopy and optical coherence tomography (OCT) have been used for tissue assessment. In particular, in a recent study experts classified lesions in the UADT with respect to their invasiveness using OCT images only. As the results were promising, automatic classification of lesions might be feasible which could assist experts in their decision making. Therefore, we address the problem of automatic lesion classification from OCT images. This task is very challenging as the available dataset is extremely small and the data quality is limited. However, as similar issues are typical in many clinical scenarios we study to what extent deep learning approaches can still be trained and used for decision support.
              BibTeX:
              @inproceedings{Gessert2019e,
                author = {Nils Gessert and Matthias Schlüter and Sarah Latus and Veronika Volgger and Christian Betz and Alexander Schlaefer},
                title = {Towards Automatic Lesion Classification in the Upper Aerodigestive Tract Using OCT and Deep Transfer Learning Methods},
                booktitle = {International Congress and Exhibition on Computer Assisted Radiology and Surgery},
                journal = {arXiv preprint arXiv:1902.03618},
                year = {2019},
                url = {https://arxiv.org/abs/1902.03618}
              }
              
            • N. Gessert, T. Sentker, F. Madesta, R. Schmitz, H. Kniep, I. Baltruschat, R. Werner, A. Schlaefer (2019), "Skin Lesion Classification Using CNNs with Patch-Based Attention and Diagnosis-Guided Loss Weighting", IEEE Transactions on Biomedical Engineering. ,1-1. Institute of Electrical and Electronics Engineers (IEEE). [Abstract] [BibTeX] [DOI] [URL]
            • Abstract: Objective: This work addresses two key problems of skin lesion classification. The first problem is the effective use of high-resolution images with pretrained standard architectures for image classification. The second problem is the high class imbalance encountered in real-world multi-class datasets. Methods: To use high-resolution images, we propose a novel patch-based attention architecture that provides global context between small, high-resolution patches. We modify three pretrained architectures and study the performance of patch-based attention. To counter class imbalance problems, we compare oversampling, balanced batch sampling, and class-specific loss weighting. Additionally, we propose a novel diagnosis-guided loss weighting method which takes the method used for ground-truth annotation into account. Results: Our patch-based attention mechanism outperforms previous methods and improves the mean sensitivity by 7%. Class balancing significantly improves the mean sensitivity and we show that our diagnosis-guided loss weighting method improves the mean sensitivity by 3% over normal loss balancing. Conclusion: The novel patch-based attention mechanism can be integrated into pretrained architectures and provides global context between local patches while outperforming other patch-based methods. Hence, pretrained architectures can be readily used with high resolution images without downsampling. The new diagnosis-guided loss weighting method outperforms other methods and allows for effective training when facing class imbalance. Significance: The proposed methods improve automatic skin lesion classification. They can be extended to other clinical applications where high-resolution image data and class imbalance are relevant.
              BibTeX:
              @article{Gessert2019g,
                 author = {Nils Gessert and Thilo Sentker and Frederic Madesta and Rüdiger Schmitz and Helge Kniep and Ivo Baltruschat and René Werner and Alexander Schlaefer},
                title = {Skin Lesion Classification Using CNNs with Patch-Based Attention and Diagnosis-Guided Loss Weighting},
                journal = {IEEE Transactions on Biomedical Engineering},
                publisher = {Institute of Electrical and Electronics Engineers (IEEE)},
                year = {2019},
                pages = {1--1},
                url = {https://arxiv.org/abs/1905.02793},
                doi = {10.1109/TBME.2019.2915839}
              }
              
            • N. Gessert, L. Wittig, D. Drömann, T. Keck, A. Schlaefer, D. B. Ellebrecht (2019), "Feasibility of Colon Cancer Detection in Confocal Laser Microscopy Images Using Convolution Neural Networks", CoRR. Vol. abs/1812.01464 [BibTeX] [URL]
            • BibTeX:
              @article{Gessert2019d,
                author = {Nils Gessert and Lukas Wittig and Daniel Drömann and Tobias Keck and Alexander Schlaefer and David B. Ellebrecht},
                title = {Feasibility of Colon Cancer Detection in Confocal Laser Microscopy Images Using Convolution Neural Networks},
                journal = {CoRR},
                year = {2019},
                volume = {abs/1812.01464},
                url = {http://arxiv.org/abs/1812.01464}
              }
              
            • J. Krueger, R. Opfer, N. Gessert, A.-C. Ostwaldt, C. Walker-Egger, P. Manogaran, C. Wang, M. Barnett, A. Schlaefer, S. Schippling (2019), "Fully automated lesion segmentation using heavily trained 3D convolutional neural networks are equivalent to manual expert segmentations", Multiple Sclerosis Journal. Vol. 25(2_suppl),844-845. [BibTeX] [DOI] [URL]
            • BibTeX:
              @article{Krueger2019a,
                 author = {Julia Krueger and Roland Opfer and Nils Gessert and Ann-Christin Ostwaldt and Christin Walker-Egger and Praveena Manogaran and C. Wang and M. Barnett and Alexander Schlaefer and S. Schippling},
                title = {Fully automated lesion segmentation using heavily trained 3D convolutional neural networks are equivalent to manual expert segmentations},
                journal = {Multiple Sclerosis Journal},
                year = {2019},
                volume = {25},
                number = {2_suppl},
                pages = {844-845},
                url = {https://journals.sagepub.com/doi/full/10.1177/1352458519872904#articleCitationDownloadContainer},
                doi = {10.1177/1352458519872904}
              }
              
            • J. Krueger, R. Opfer, N. Gessert, S. Schippling, A.-C. Ostwaldt, C. Walker-Egger, P. Manogaran, A. Schlaefer (2019), "Fully Automated Longitudinal Segmentation of new or Enlarging Multiple Sclerosis (MS) Lesions Using 3D Convolutional Neural Networks", Clinical Neuroradiology., Sep, 2019. Vol. 29(Suppl 1),10. [BibTeX] [URL]
            • BibTeX:
              @article{Krueger2019,
                author = {Julia Krueger and Roland Opfer and Nils Gessert and Sven Schippling and Ann-Christin Ostwaldt and Christine Walker-Egger and Praveena Manogaran and Alexander Schlaefer},
                title = {Fully Automated Longitudinal Segmentation of new or Enlarging Multiple Sclerosis (MS) Lesions Using 3D Convolutional Neural Networks},
                journal = {Clinical Neuroradiology},
                year = {2019},
                volume = {29},
                number = {Suppl 1},
                pages = {10},
                url = {https://www.neurorad.de/files/content/s00062-019-00826-9.pdf}
              }
              
            • S. Latus, F. Griese, M. Schlüter, C. Otte, M. Möddel, M. Graeser, T. Saathoff, T. Knopp, A. Schlaefer (2019), "Bimodal intravascular volumetric imaging combining OCT and MPI", Medical Physics., feb, 2019. Vol. 46(3),1371-1383. Wiley. [Abstract] [BibTeX] [DOI] [URL]
            • Abstract: Purpose Intravascular optical coherence tomography (IVOCT) is a catheter-based image modality allowing for high-resolution imaging of vessels. It is based on a fast sequential acquisition of A-scans with an axial spatial resolution in the range of 5-10 µm, that is, one order of magnitude higher than in conventional methods like intravascular ultrasound or computed tomography angiography. However, position and orientation of the catheter in patient coordinates cannot be obtained from the IVOCT measurements alone. Hence, the pose of the catheter needs to be established to correctly reconstruct the three-dimensional vessel shape. Magnetic particle imaging (MPI) is a three-dimensional tomographic, tracer-based, and radiation-free image modality providing high temporal resolution with unlimited penetration depth. Volumetric MPI images are angiographic and hence suitable to complement IVOCT as a comodality. We study simultaneous bimodal IVOCT MPI imaging with the goal of estimating the IVOCT pullback path based on the 3D MPI data. Methods We present a setup to study and evaluate simultaneous IVOCT and MPI image acquisition of differently shaped vessel phantoms. First, the influence of the MPI tracer concentration on the optical properties required for IVOCT is analyzed. Second, using a concentration allowing for simultaneous imaging, IVOCT and MPI image data are acquired sequentially and simultaneously. Third, the luminal centerline is established from the MPI image volumes and used to estimate the catheter pullback trajectory for IVOCT image reconstruction. The image volumes are compared to the known shape of the phantoms. Results We were able to identify a suitable MPI tracer concentration of 2.5 mmol/L with negligible influence on the IVOCT signal. The pullback trajectory estimated from MPI agrees well with the centerline of the phantoms. Its mean absolute error ranges from 0.27 to 0.28 mm and from 0.25 mm to 0.28 mm for sequential and simultaneous measurements, respectively. Likewise, reconstructing the shape of the vessel phantoms works well with mean absolute errors for the diameter ranging from 0.11 to 0.21 mm and from 0.06 to 0.14 mm for sequential and simultaneous measurements, respectively. Conclusions Magnetic particle imaging can be used in combination with IVOCT to estimate the catheter trajectory and the vessel shape with high precision and without ionizing radiation.
              BibTeX:
              @article{Latus2019,
                author = {Sarah Latus and Florian Griese and Matthias Schlüter and Christoph Otte and Martin Möddel and Matthias Graeser and Thore Saathoff and Tobias Knopp and Alexander Schlaefer},
                title = {Bimodal intravascular volumetric imaging combining OCT and MPI},
                journal = {Medical Physics},
                publisher = {Wiley},
                year = {2019},
                volume = {46},
                number = {3},
                pages = {1371-1383},
                url = {https://aapm.onlinelibrary.wiley.com/doi/abs/10.1002/mp.13388},
                doi = {10.1002/mp.13388}
              }
              
            • S. Latus, M. Neidhardt, M. Lutz, N. Gessert, N. Frey, A. Schlaefer (2019), "Quantitative Analysis of 3D Artery Volume Reconstructions Using Biplane Angiography and Intravascular OCT Imaging", In 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)., jul, 2019. ,6004-6007. IEEE. [BibTeX] [DOI]
            • BibTeX:
              @inproceedings{Latus2019a,
                author = {Sarah Latus and Maximilian Neidhardt and Matthias Lutz and Nils Gessert and Norbert Frey and Alexander Schlaefer},
                title = {Quantitative Analysis of 3D Artery Volume Reconstructions Using Biplane Angiography and Intravascular OCT Imaging},
                booktitle = {2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)},
                publisher = {IEEE},
                year = {2019},
                pages = {6004-6007},
                doi = {10.1109/embc.2019.8857712}
              }
              
            • M. Laves, S. Latus, J. Bergmeier, T. Ortmaier, L. A. Kahrs, A. Schlaefer (2019), "Endoscopic versus volumetric OCT imaging of mastoid bone structure for pose estimation in minimally invasive cochlear implant Radiosurgery", International Journal of Computer Assisted Radiology and Surgery., In International Journal of Computer Assisted Radiology and Surgery., May, 2019. Vol. 14(S1),137-137. Springer Science and Business Media LLC. [BibTeX] [DOI] [URL]
            • BibTeX:
              @article{Laves2019,
                author = {M.H. Laves and S. Latus and J. Bergmeier and T. Ortmaier and L. A. Kahrs and A. Schlaefer},
                title = {Endoscopic versus volumetric OCT imaging of mastoid bone structure for pose estimation in minimally invasive cochlear implant Radiosurgery},
                booktitle = {International Journal of Computer Assisted Radiology and Surgery},
                journal = {International Journal of Computer Assisted Radiology and Surgery},
                publisher = {Springer Science and Business Media LLC},
                year = {2019},
                volume = {14},
                number = {S1},
                pages = {137-137},
                url = {http://arxiv.org/abs/1901.06490},
                doi = {10.1007/s11548-019-01969-3}
              }
              
            • M. Schlüter, M. M. Fuh, S. Maier, C. Otte, P. Kiani, N.-O. Hansen, R. J. D. Miller, H. Schlüter, A. Schlaefer (2019), "Towards OCT-Navigated Tissue Ablation with a Picosecond Infrared Laser (PIRL) and Mass-Spectrometric Analysis", In Annual International Conference of the IEEE Engineering in Medicine and Biology Society. ,158-161. [Abstract] [BibTeX] [DOI]
             • Abstract: Medical lasers are commonly used in interventions to ablate tumor tissue. Recently, the picosecond infrared laser has been introduced, which greatly decreases damage to surrounding healthy tissue. Further, its ablation plume contains intact biomolecules which can be collected and analyzed by mass spectrometry. This allows for a specific characterization of the tissue. For a precise treatment, however, a suitable guidance is needed. Further, spatial information is required if the tissue is to be characterized at different parts in the ablated area. Therefore, we propose a system which employs optical coherence tomography as the guiding imaging modality. We describe a prototypical system which provides automatic ablation of areas defined in the image data. For this purpose, we use a calibration with a robot which drives the laser fiber and collects the arising plume. We demonstrate our system on porcine tissue samples.
              BibTeX:
              @inproceedings{Schlueter2019b,
                 author = {Matthias Schlüter and Manke M. Fuh and Stephanie Maier and Christoph Otte and Parnian Kiani and Nils-Owe Hansen and R. J. Dwayne Miller and Hartmut Schlüter and Alexander Schlaefer},
                title = {Towards OCT-Navigated Tissue Ablation with a Picosecond Infrared Laser (PIRL) and Mass-Spectrometric Analysis},
                booktitle = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society},
                year = {2019},
                pages = {158-161},
                doi = {10.1109/EMBC.2019.8856808}
              }
              
            • M. Schlüter, C. Fürweger, A. Schlaefer (2019), "Optimizing Configurations for 7-DoF Robotic Ultrasound Guidance in Radiotherapy of the Prostate", In Annual International Conference of the IEEE Engineering in Medicine and Biology Society. [Abstract] [BibTeX] [DOI]
             • Abstract: Robotic ultrasound guidance is promising for tracking of organ motion during radiotherapy treatments, but the radio-opaque robot and probe interfere with beam delivery. The effect on treatment plan quality can be mitigated by the use of a robot arm with kinematic redundancy, such that the robot is able to elude delivered beams during treatment by changing its configuration. However, these changes require robot motion close to the patient, lead to an increased treatment time, and require coordination with the beam delivery. We propose an optimization workflow which integrates the problem of selecting suitable robot configurations into the treatment plan optimization. Starting with a large set of candidate configurations, a minimal subset is determined which provides equivalent plan quality. Our results show that, typically, six configurations are sufficient for this purpose. Furthermore, we show that optimal configurations can be reused for dose planning of subsequent patients.
              BibTeX:
              @inproceedings{Schlueter2019a,
                author = {Matthias Schlüter and Christoph Fürweger and Alexander Schlaefer},
                title = {Optimizing Configurations for 7-DoF Robotic Ultrasound Guidance in Radiotherapy of the Prostate},
                booktitle = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society},
                year = {2019},
                doi = {10.1109/EMBC.2019.8857245}
              }
              
            • M. Schlüter, C. Fürweger, A. Schlaefer (2019), "Optimizing robot motion for robotic ultrasound-guided radiation therapy", Physics in Medicine & Biology., oct, 2019. Vol. 64(19),195012. IOP Publishing. [Abstract] [BibTeX] [DOI]
             • Abstract: An important aspect of robotic radiation therapy is active compensation of target motion. Recently, ultrasound has been proposed to obtain real-time volumetric images of abdominal organ motion. One approach to realize flexible probe placement throughout the treatment fraction is based on a robotic arm holding the ultrasound probe. However, the probe and the robot holding it may obstruct some of the beams with a potentially adverse effect on the plan quality. This can be mitigated by using a kinematically redundant robot, which allows maintaining a steady pose of the ultrasound probe while moving its elbow in order to minimize beam blocking. Ultimately, the motion of both the beam-source-carrying robot and the ultrasound-probe-holding robot contributes to the overall treatment time. We propose an approach to optimize the motion and coordination of both robots based on a generalized traveling salesman problem. Furthermore, we study an application of the model to a prostate treatment scenario. Because the underlying optimization problem is hard, we compare results from a state-of-the-art heuristic solver and an approximation scheme with low computational effort. Our results show that integration of the robot holding the ultrasound probe is feasible with acceptable overhead in overall treatment time. For clinically realistic velocities of the robots, the overhead is less than 4%, which is a small cost for the added benefit of continuous, volumetric, and non-ionizing tracking of organ motion over periodic X-ray-based tracking.
              BibTeX:
              @article{Schlueter2019e,
                author = {Matthias Schlüter and Christoph Fürweger and Alexander Schlaefer},
                title = {Optimizing robot motion for robotic ultrasound-guided radiation therapy},
                journal = {Physics in Medicine & Biology},
                publisher = {IOP Publishing},
                year = {2019},
                volume = {64},
                number = {19},
                pages = {195012},
                doi = {10.1088/1361-6560/ab3bfb}
              }
              
            • M. Schlüter, S. Gerlach, C. Fürweger, A. Schlaefer (2019), "Analysis and Optimization of the Robot Setup for Robotic-Ultrasound-Guided Radiation Therapy", International Journal of Computer Assisted Radiology and Surgery. Vol. 14(8),1379-1387. [Abstract] [BibTeX] [DOI]
            • Abstract: Purpose Robotic ultrasound promises continuous, volumetric, and nonionizing tracking of organ motion during radiation therapy. However, placement of the robot is critical because it is radio-opaque and might severely influence the achievable dose distribution. Methods We propose two heuristic optimization strategies for automatic placement of an ultrasound robot around a patient. Considering a kinematically redundant robot arm, we compare a generic approach based on stochastic search and a more problem-specific segmentwise construction approach. The former allows for multiple elbow configurations while the latter is deterministic. Additionally, we study different objective functions guiding the search. Our evaluation is based on data for ten actual prostate cancer cases and we compare the resulting plan quality for both methods to manually chosen robot configurations previously proposed. Results The mean improvements in the treatment planning objective value with respect to the best manually selected robot position and a single elbow configuration range from 8.2 % to 32.8 % and 8.5 % to 15.5 % for segmentwise construction and stochastic search, respectively. Considering three different elbow configurations, the stochastic search results in better objective values in 80 % of the cases, with 30 % being significantly better. The optimization strategies are robust with respect to beam sampling and transducer orientation and using previous optimization results as starting point for stochastic search typically results in better solutions compared to random starting points. Conclusions We propose a robust and generic optimization scheme, which can be used to optimize the robot placement for robotic ultrasound guidance in radiation therapy. The automatic optimization further mitigates the impact of robotic ultrasound on the treatment plan quality.
              BibTeX:
              @article{Schlueter2019c,
                author = {Matthias Schlüter and Stefan Gerlach and Christoph Fürweger and Alexander Schlaefer},
                title = {Analysis and Optimization of the Robot Setup for Robotic-Ultrasound-Guided Radiation Therapy},
                journal = {International Journal of Computer Assisted Radiology and Surgery},
                year = {2019},
                volume = {14},
                number = {8},
                pages = {1379-1387},
                doi = {10.1007/s11548-019-02009-w}
              }
              
            • M. Schlüter, S. Gerlach, C. Fürweger, A. Schlaefer (2019), "Analysis and Optimization of the Robot Setup for Robotic-Ultrasound-Guided Radiation Therapy", In Presented at International Congress and Exhibition on Computer Assisted Radiology and Surgery. [BibTeX]
            • BibTeX:
              @inproceedings{Schlueter2019d,
                 author = {Matthias Schlüter and Stefan Gerlach and Christoph Fürweger and Alexander Schlaefer},
                title = {Analysis and Optimization of the Robot Setup for Robotic-Ultrasound-Guided Radiation Therapy},
                booktitle = {Presented at International Congress and Exhibition on Computer Assisted Radiology and Surgery},
                year = {2019}
              }
              
            • M. Schlüter, C. Otte, T. Saathoff, N. Gessert, A. Schlaefer (2019), "Feasibility of a markerless tracking system based on optical coherence tomography", In SPIE Medical Imaging. [Abstract] [BibTeX] [DOI] [URL]
             • Abstract: Clinical tracking systems are popular but typically require specific tracking markers. During the last years, scanning speed of optical coherence tomography (OCT) has increased to A-scan rates above 1 MHz allowing to acquire volume scans of moving objects. Therefore, we propose a markerless tracking system based on OCT to obtain small volumetric images including information of sub-surface structures at high spatio-temporal resolution. In contrast to conventional vision based approaches, this allows identifying natural landmarks even for smooth and homogeneous surfaces. We describe the optomechanical setup and process flow to evaluate OCT volumes for translations and accordingly adjust the position of the field-of-view to follow moving samples. While our current setup is still preliminary, we demonstrate tracking of motion transversal to the OCT beam of up to 20 mm/s with errors around 0.2 mm and even better for some scenarios. Tracking is evaluated on a clearly structured and on a homogeneous phantom as well as on actual tissue samples. The results show that OCT is promising for fast and precise tracking of smooth, monochromatic objects in medical scenarios.
              BibTeX:
              @inproceedings{Schlueter2019,
                author = {Matthias Schlüter and Christoph Otte and Thore Saathoff and Nils Gessert and Alexander Schlaefer},
                title = {Feasibility of a markerless tracking system based on optical coherence tomography},
                booktitle = {SPIE Medical Imaging},
                year = {2019},
                url = {https://arxiv.org/abs/1810.12355v1},
                doi = {10.1117/12.2512178}
              }
              
            • F. Sommer, L. Bargsten, A. Schlaefer (2019), "IVUS-Simulation for Improving Segmentation Performance of Neural Networks via Data Augmentation", CURAC 2019 Tagungsband Reutlingen., In CURAC 2019 Tagungsband Reutlingen., Sep, 2019. ,47-51. [Abstract] [BibTeX] [URL]
             • Abstract: Convolutional neural networks (CNNs) produce promising results when applied to a wide range of medical imaging tasks including the segmentation of tissue structures like the artery lumen and wall layers in intravascular ultrasound (IVUS) image data. However, large annotated datasets are needed for training to achieve sufficient performance. To increase the dataset size, data augmentation techniques like random image transformations are commonly used. In this work, we propose a new systematic approach to generate artificial IVUS image data with the ultrasound simulation software Field II in order to perform data augmentation. A simulation model was systematically tuned to a clinical data set based on the Fréchet Inception Distance (FID). We show that the segmentation performance of a state-of-the-art CNN with U-Net architecture improves when pre-trained with our synthetic IVUS data.
              BibTeX:
              @inproceedings{Sommer2019,
                author = {Franziska Sommer and Lennart Bargsten and Alexander Schlaefer},
                title = {IVUS-Simulation for Improving Segmentation Performance of Neural Networks via Data Augmentation},
                booktitle = {CURAC 2019 Tagungsband Reutlingen},
                journal = {CURAC 2019 Tagungsband Reutlingen},
                year = {2019},
                pages = {47-51},
                url = {https://www.curac.org/images/advportfoliopro/images/CURAC2019/Tagungsband_Reutlingen}
              }
              
             • S.-T. Antoni*, S. Lehmann*, S. Schupp, A. Schlaefer (2019), "An Online Model Checking Approach to Soft-Tissue Detection for Rupture", CURAC 2019 Tagungsband Reutlingen., In CURAC 2019 Tagungsband Reutlingen., Sep, 2019. Vol. 1,83-88. *Authors contributed equally. [Abstract] [BibTeX]
             • Abstract: Robotic needle insertion based on haptic feedback can be imprecise and error-prone, especially for sudden force changes in case of ruptures. To predict rupture events early during tissue deformation, knowledge is required about the type and characteristics of the tissues involved. Several approaches to this exist, but they increase system complexity by including additional sensors or imaging modalities. We introduce a new approach based on formal model checking, which allows us to identify tissue by a directed search through the state space of a needle insertion model. Using force data measured at the needle shaft during cutting motion, our method identifies the most probable tissue iteratively at run-time, based on a priori information of possible tissues. In a case study of needle insertions into gelatin phantoms with varying gelatin-water ratios, our approach allowed 90.7% correct identifications and may thus be considered to identify tissue during robotic needle insertion.
              BibTeX:
              @inproceedings{SvenThomasAntoni2019,
                 author = {Sven-Thomas Antoni and Sascha Lehmann and Sibylle Schupp and Alexander Schlaefer},
                title = {An Online Model Checking Approach to Soft-Tissue Detection for Rupture},
                booktitle = {CURAC 2019 Tagungsband Reutlingen},
                journal = {CURAC 2019 Tagungsband Reutlingen},
                year = {2019},
                volume = {1},
                 pages = {83-88},
                 note = {S.-T. Antoni and S. Lehmann contributed equally}
              }
              

              2018

              • S.-T. Antoni, S. Lehmann, M. Neidhardt, K. Fehrs, C. Ruprecht, F. Kording, G. Adam, S. Schupp, A. Schlaefer (2018), "Model checking for trigger loss detection during Doppler ultrasound-guided fetal cardiovascular MRI", International Journal of Computer Assisted Radiology and Surgery., aug, 2018. Vol. 13(11),1755-1766. Springer Science and Business Media LLC. [Abstract] [BibTeX] [DOI] [URL]
              • Abstract: Ultrasound (US) is the state of the art in prenatal diagnosis to depict fetal heart diseases. Cardiovascular magnetic resonance imaging (CMRI) has been proposed as a complementary diagnostic tool. Currently, only trigger-based methods allow the temporal and spatial resolutions necessary to depict the heart over time. Of these methods, only Doppler US (DUS)-based triggering is usable with higher field strengths. DUS is sensitive to motion. This may lead to signal and, ultimately, trigger loss. If too many triggers are lost, the image acquisition is stopped, resulting in a failed imaging sequence. Moreover, losing triggers may prolong image acquisition. Hence, if no actual trigger can be found, injected triggers are added to the signal based on the trigger history.
                BibTeX:
                @article{Antoni2018,
                  author = {Sven-Thomas Antoni and Sascha Lehmann and Maximilian Neidhardt and Kai Fehrs and Christian Ruprecht and Fabian Kording and Gerhard Adam and Sibylle Schupp and Alexander Schlaefer},
                  title = {Model checking for trigger loss detection during Doppler ultrasound-guided fetal cardiovascular MRI},
                  journal = {International Journal of Computer Assisted Radiology and Surgery},
                  publisher = {Springer Science and Business Media LLC},
                  year = {2018},
                  volume = {13},
                  number = {11},
                  pages = {1755-1766},
                  url = {https://doi.org/10.1007/s11548-018-1832-5},
                  doi = {10.1007/s11548-018-1832-5}
                }
                
              • N. Gessert, J. Beringhoff, C. Otte, A. Schlaefer (2018), "Force estimation from OCT volumes using 3D CNNs", International Journal of Computer Assisted Radiology and Surgery., may, 2018. Vol. 13(7),1073–1082. Springer Science and Business Media LLC. [Abstract] [BibTeX] [DOI] [URL]
               • Abstract: Estimating the interaction forces of instruments and tissue is of interest, particularly to provide haptic feedback during robot-assisted minimally invasive interventions. Different approaches based on external and integrated force sensors have been proposed. These are hampered by friction, sensor size, and sterilizability. We investigate a novel approach to estimate the force vector directly from optical coherence tomography image volumes.
                BibTeX:
                @article{Gessert2018a,
                  author = {Nils Gessert and Jens Beringhoff and Christoph Otte and Alexander Schlaefer},
                  title = {Force estimation from OCT volumes using 3D CNNs},
                  journal = {International Journal of Computer Assisted Radiology and Surgery},
                  publisher = {Springer Science and Business Media LLC},
                  year = {2018},
                  volume = {13},
                  number = {7},
                  pages = {1073–1082},
                  url = {https://arxiv.org/abs/1804.10002},
                  doi = {10.1007/s11548-018-1777-8}
                }
                
              • N. Gessert, M. Heyder, S. Latus, D. M. Leistner, Y. S. Abdelwahed, M. Lutz, A. Schlaefer (2018), "Adversarial Training for Patient-Independent Feature Learning with IVOCT Data for Plaque Classification", In International Conference on Medical Imaging with Deep Learning., May, 2018. [Abstract] [BibTeX] [URL]
              • Abstract: Deep learning methods have shown impressive results for a variety of medical problems over the last few years. However, datasets tend to be small due to time-consuming annotation. As datasets with different patients are often very heterogeneous generalization to new patients can be difficult. This is complicated further if large differences in image acquisition can occur, which is common during intravascular optical coherence tomography for coronary plaque imaging. We address this problem with an adversarial training strategy where we force a part of a deep neural network to learn features that are independent of patient- or acquisition-specific characteristics. We compare our regularization method to typical data augmentation strategies and show that our approach improves performance for a small medical dataset.
                BibTeX:
                @conference{Gessert2018d,
                  author = {Nils Gessert and Markus Heyder and Sarah Latus and David M. Leistner and Youssef S. Abdelwahed and Matthias Lutz and Alexander Schlaefer},
                  title = {Adversarial Training for Patient-Independent Feature Learning with IVOCT Data for Plaque Classification},
                  booktitle = {International Conference on Medical Imaging with Deep Learning},
                  year = {2018},
                  url = {https://arxiv.org/abs/1805.06223}
                }
                
              • N. Gessert, M. Heyder, S. Latus, M. Lutz, A. Schlaefer (2018), "Plaque Classification in Coronary Arteries from IVOCT Images Using Convolutional Neural Networks and Transfer Learning", International Journal of Computer Assisted Radiology and Surgery., In (Suppl1) International Journal of CARS'2018., may, 2018. Vol. 13(S1),1-273. Springer Science and Business Media LLC. [BibTeX] [DOI] [URL]
              • BibTeX:
                @article{Gessert2018,
                  author = {N. Gessert and M. Heyder and S. Latus and M. Lutz and A. Schlaefer},
                  title = {Plaque Classification in Coronary Arteries from IVOCT Images Using Convolutional Neural Networks and Transfer Learning},
                  booktitle = {(Suppl1) International Journal of CARS'2018},
                  journal = {International Journal of Computer Assisted Radiology and Surgery},
                  publisher = {Springer Science and Business Media LLC},
                  year = {2018},
                  volume = {13},
                  number = {S1},
                  pages = {1--273},
                  url = {https://doi.org/10.1007/s11548-018-1766-y},
                  doi = {10.1007/s11548-018-1766-y}
                }
                
              • N. Gessert, T. Priegnitz, T. Saathoff, S.-T. Antoni, D. Meyer, M. F. Hamann, K.-P. Jünemann, C. Otte, A. Schlaefer (2018), "Needle Tip Force Estimation using an OCT Fiber and a Fused convGRU-CNN Architecture - MICCAI 2018", In International Conference on Medical Image Computing and Computer-Assisted Intervention. Vol. 11073,222-229, Spotlight Talk. [Abstract] [BibTeX] [URL]
               • Abstract: Needle insertion is common during minimally invasive interventions such as biopsy or brachytherapy. During soft tissue needle insertion, forces acting at the needle tip cause tissue deformation and needle deflection. Accurate needle tip force measurement provides information on needle-tissue interaction and helps detecting and compensating potential misplacement. For this purpose we introduce an image-based needle tip force estimation method using an optical fiber imaging the deformation of an epoxy layer below the needle tip over time. For calibration and force estimation, we introduce a novel deep learning-based fused convolutional GRU-CNN model which effectively exploits the spatio-temporal data structure. The needle is easy to manufacture and our model achieves a mean absolute error of 176 ± 150 mN with a cross-correlation coefficient of 0.9996, clearly outperforming other methods. We test needles with different materials to demonstrate that the approach can be adapted for different sensitivities and force ranges. Furthermore, we validate our approach in an ex-vivo prostate needle insertion scenario.
                BibTeX:
                @conference{Gessert2018c,
                   author = {Nils Gessert and Torben Priegnitz and Thore Saathoff and Sven-Thomas Antoni and David Meyer and Moritz Franz Hamann and Klaus-Peter Jünemann and Christoph Otte and Alexander Schlaefer},
                  title = {Needle Tip Force Estimation using an OCT Fiber and a Fused convGRU-CNN Architecture - MICCAI 2018},
                  booktitle = {International Conference on Medical Image Computing and Computer-Assisted Intervention},
                  year = {2018},
                  volume = {11073},
                  pages = {222-229, Spotlight Talk},
                  url = {https://arxiv.org/abs/1805.11911}
                }
                
              • N. Gessert, M. Schlüter, A. Schlaefer (2018), "A deep learning approach for pose estimation from volumetric OCT data", Medical Image Analysis., may, 2018. Vol. 46,162-179. Elsevier BV. [Abstract] [BibTeX] [DOI] [URL]
               • Abstract: Tracking the pose of instruments is a central problem in image-guided surgery. For microscopic scenarios, optical coherence tomography (OCT) is increasingly used as an imaging modality. OCT is suitable for accurate pose estimation due to its micrometer range resolution and volumetric field of view. However, OCT image processing is challenging due to speckle noise and reflection artifacts in addition to the images' 3D nature. We address pose estimation from OCT volume data with a new deep learning-based tracking framework. For this purpose, we design a new 3D convolutional neural network (CNN) architecture to directly predict the 6D pose of a small marker geometry from OCT volumes. We use a hexapod robot to automatically acquire labeled data points which we use to train 3D CNN architectures for multi-output regression. We use this setup to provide an in-depth analysis on deep learning-based pose estimation from volumes. Specifically, we demonstrate that exploiting volume information for pose estimation yields higher accuracy than relying on 2D representations with depth information. Supporting this observation, we provide quantitative and qualitative results that 3D CNNs effectively exploit the depth structure of marker objects. Regarding the deep learning aspect, we present efficient design principles for 3D CNNs, making use of insights from the 2D deep learning community. In particular, we present Inception3D as a new architecture which performs best for our application. We show that our deep learning approach reaches errors at our ground-truth labels' resolution. We achieve a mean average error of 14.89 ± 9.3 µm and 0.096 ± 0.072° for position and orientation learning, respectively.
                BibTeX:
                @article{Gessert2018b,
                  author = {Nils Gessert and Matthias Schlüter and Alexander Schlaefer},
                  title = {A deep learning approach for pose estimation from volumetric OCT data},
                  journal = {Medical Image Analysis},
                  publisher = {Elsevier BV},
                  year = {2018},
                  volume = {46},
                  pages = {162--179},
                  url = {https://arxiv.org/abs/1803.03852},
                  doi = {10.1016/j.media.2018.03.002}
                }
                
               • N. Gessert, T. Sentker, F. Madesta, R. Schmitz, H. Kniep, I. Baltruschat, R. Werner, A. Schlaefer (2018), "Skin Lesion Diagnosis using Ensembles, Unscaled Multi-Crop Evaluation and Loss Weighting", ArXiv e-prints., In International Conference on Medical Imaging with Deep Learning., May, 2018. Oral. Best challenge submission with public data only. Overall 2nd placed team. [BibTeX] [URL]
               • BibTeX:
                @article{Gessert2018e,
                  author = {Nils Gessert and Thilo Sentker and Frederic Madesta and Rüdiger Schmitz and Helge Kniep and Ivo Baltruschat and René Werner and Alexander Schlaefer},
                  title = {Skin Lesion Diagnosis using Ensembles, Unscaled Multi-Crop Evaluation and Loss Weighting},
                  booktitle = {International Conference on Medical Imaging with Deep Learning},
                  journal = {ArXiv e-prints},
                  year = {2018},
                   note = {Oral. Best challenge submission with public data only. Overall 2nd placed team},
                  url = {https://arxiv.org/abs/1808.01694}
                }
                
              • F. Griese, S. Latus, M. Gräser, M. Möddel, M. Schlüter, C. Otte, T. Saathoff, T. Knopp, A. Schlaefer (2018), "Stenosis analysis by synergizing MPI and intravascular OCT", In International Workshop on Magnetic Particle Imaging. ,217-218. [BibTeX]
              • BibTeX:
                @inproceedings{Griese2018,
                  author = {F. Griese and S. Latus and M. Gräser and M. Möddel and M. Schlüter and C. Otte and T. Saathoff and T. Knopp and A. Schlaefer},
                  title = {Stenosis analysis by synergizing MPI and intravascular OCT},
                  booktitle = {International Workshop on Magnetic Particle Imaging},
                  year = {2018},
                  pages = {217-218}
                }
                
              • S. Latus, T. Knopp, A. Schlaefer, F. Griese, M. Gräser, M. Möddel, M. Schlüter, C. Otte, N. Gessert, T. Saathoff (2018), "Towards bimodal intravascular OCT MPI volumetric imaging", Proc.SPIE., In Medical Imaging 2018: Physics of Medical Imaging., mar, 2018. Vol. 10573E,10573E. SPIE. [Abstract] [BibTeX] [DOI] [URL]
               • Abstract: Magnetic Particle Imaging (MPI) is a tracer-based tomographic non-ionizing imaging method providing fully three-dimensional spatial information at a high temporal resolution without any limitation in penetration depth. One challenge for current preclinical MPI systems is their modest spatial resolution in the range of 1 mm - 5 mm. Intravascular Optical Coherence Tomography (IVOCT), on the other hand, has a very high spatial and temporal resolution, but it does not provide an accurate 3D positioning of the IVOCT images. In this work, we will show that MPI and OCT can be combined to reconstruct an accurate IVOCT volume. A center of mass trajectory is estimated from the MPI data as a basis to reconstruct the poses of the IVOCT images. The feasibility of bimodal IVOCT and MPI imaging is demonstrated with a series of 3D printed vessel phantoms.
                BibTeX:
                @inproceedings{Latus2018,
                  author = {Sarah Latus and Tobias Knopp and Alexander Schlaefer and Florian Griese and Matthias Gräser and Martin Möddel and Matthias Schlüter and Christoph Otte and Nils Gessert and Thore Saathoff},
                  editor = {Guang-Hong Chen and Joseph Y. Lo and Taly Gilat Schmidt},
                  title = {Towards bimodal intravascular OCT MPI volumetric imaging},
                  booktitle = {Medical Imaging 2018: Physics of Medical Imaging},
                  journal = {Proc.SPIE},
                  publisher = {SPIE},
                  year = {2018},
                  volume = {10573E},
                  pages = {10573E},
                  url = {https://doi.org/10.1117/12.2293497},
                  doi = {10.1117/12.2293497}
                }
                
              • S. Lehmann, S.-T. Antoni, A. Schlaefer, S. Schupp (2018), "A Quantitative Metric Temporal Logic for Execution-Time Constrained Verification", In Model-Based Design of Cyber Physical Systems (CyPhy'18). Torino, Italy, Oct 2018, 2018. [BibTeX]
              • BibTeX:
                @conference{Lehmann2018,
                   author = {Sascha Lehmann and Sven-Thomas Antoni and Alexander Schlaefer and Sibylle Schupp},
                  title = {A Quantitative Metric Temporal Logic for Execution-Time Constrained Verification},
                  booktitle = {Model-Based Design of Cyber Physical Systems (CyPhy'18)},
                  year = {2018}
                }
                
              • J. Padberg, A. Schlaefer, S. Schupp (2018), "Ein Ansatz zur nachvollziehbaren Verifikation medizinisch-cyber-physikalischer Systeme", In Software Engineering und Software Management 2018. Bonn ,209-210. Gesellschaft für Informatik. [Abstract] [BibTeX]
               • Abstract: Medical cyber-physical systems require, on the one hand, adaptation to patient-specific parameters during a treatment and, on the other hand, proof of safe system behavior. We propose to combine verifiability by means of online model checking with traceability through the application of rule-based transformations.
                BibTeX:
                @inproceedings{Padberg2018,
                  author = {Julia Padberg and Alexander Schlaefer and Sibylle Schupp},
                  editor = {Matthias Tichy and Eric Bodden and Marco Kuhrmann and Stefan Wagner and Jan-Philipp Steghöfer},
                  title = {Ein Ansatz zur nachvollziehbaren Verifikation medizinisch-cyber-physikalischer Systeme},
                  booktitle = {Software Engineering und Software Management 2018},
                  publisher = {Gesellschaft für Informatik},
                  year = {2018},
                  pages = {209-210}
                }
                
              • O. Rajput*, N. Gessert*, M. Gromniak, L. Matthäus, A. Schlaefer (2018), "Towards Head Motion Compensation Using Multi-Scale Convolutional Neural Networks", 17. Jahrestagung der Deutschen Gesellschaft für Computer- und Roboter Assistierte Chirurgie., In 17. Jahrestagung der Deutschen Gesellschaft für Computer- und Roboter Assistierte Chirurgie. ,138-141 *Shared First Authors. [Abstract] [BibTeX] [URL]
               • Abstract: Head pose estimation and tracking is useful in a variety of medical applications. With the advent of RGBD cameras like Kinect, it has become feasible to do markerless tracking by estimating the head pose directly from the point clouds. One specific medical application is robot assisted transcranial magnetic stimulation (TMS) where any patient motion is compensated with the help of a robot. For increased patient comfort, it is important to track the head without markers. In this regard, we address the head pose estimation problem using two different approaches. In the first approach, we build upon the more traditional approach of model based head tracking, where a head model is morphed according to the particular head to be tracked and the morphed model is used to track the head in the point cloud streams. In the second approach, we propose a new multi-scale convolutional neural network architecture for more accurate pose regression. Additionally, we outline a systematic data set acquisition strategy using a head phantom mounted on the robot and ground-truth labels generated using a highly accurate tracking system.
                BibTeX:
                @inproceedings{Rajput*2018,
                  author = {Omer Rajput* and Nils Gessert* and Martin Gromniak and Lars Matthäus and Alexander Schlaefer},
                  title = {Towards Head Motion Compensation Using Multi-Scale Convolutional Neural Networks},
                  booktitle = {17. Jahrestagung der Deutschen Gesellschaft für Computer- und Roboter Assistierte Chirurgie},
                  journal = {17. Jahrestagung der Deutschen Gesellschaft für Computer- und Roboter Assistierte Chirurgie},
                  year = {2018},
                   pages = {138-141},
                  note = {*Shared First Authors},
                  url = {http://adsabs.harvard.edu/abs/2018arXiv180703651R}
                }
                
              • M. Wendebourg, O. Rajput, A. Schlaefer (2018), "Detection of Simulated Clonic Seizures from Depth Camera Recordings", Journal of Image and Graphics., In Journal International Conference on Bioinformatics and Biomedical Technology., Dec, 2018. Vol. 6(2),88-94. EJournal Publishing. [Abstract] [BibTeX] [DOI] [URL]
               • Abstract: Tonic-clonic seizures pose a serious risk of injury to those afflicted. Therefore, patients both in home-based and residential care can require constant monitoring. Technical aids may help by alerting caregivers of detected seizures. So far, the usability of several sensor systems for seizure detection has been shown. However, most of these systems require some sensors to be physically attached to the patient or are limited with respect to their accuracy or robustness. Thus, we investigated the feasibility of using depth image sequences for the detection of seizure-like periodic motion. A static camera setup was utilized to monitor a limited region of interest comparable to a patient's bed during the night. Data of simulated limb motion including seizure-like movement was acquired with the help of a robot moving a hand phantom both uncovered and covered by a duvet, ensuring the availability of a known ground truth. Subsequently, a characteristic of the recorded images which may be used to differentiate between normal and seizure-like motion was defined. Finally, linear discriminant analysis was applied to the determined characteristic. We found that the rapid detection of seizure-like periodic motion from depth image sequences is feasible even when the moving limb is covered by a blanket.
                BibTeX:
                @article{Wendebourg2018,
                   author = {Mareike Wendebourg and Omer Rajput and Alexander Schlaefer},
                  title = {Detection of Simulated Clonic Seizures from Depth Camera Recordings},
                  booktitle = {Journal International Conference on Bioinformatics and Biomedical Technology},
                  journal = {Journal of Image and Graphics},
                  publisher = {EJournal Publishing},
                  year = {2018},
                  volume = {6},
                  number = {2},
                  pages = {88-94},
                  url = {http://www.joig.org/index.php?m=content&c=index&a=show&catid=48&id=184},
                  doi = {10.18178/joig.6.2.88-94}
                }
                
              • T. Yu, F.-A. Siebert, A. Schlaefer (2018), "A stochastic optimization approach accounting for uncertainty in HDR brachytherapy needle placement", International Journal of Computer Assisted Radiology and Surgery., may, 2018. Vol. 13(S1),1-273. Springer Science and Business Media LLC. [Abstract] [BibTeX] [DOI] [URL]
               • Abstract: HDR brachytherapy requires the optimization of dwell times to shape the dose distribution according to the planning target volume (PTV) and organs at risk (OAR). Often, this is done after needle placement, i.e., when the needle geometry is already fixed. However, the flexibility in arranging the needles can impact the plan quality. We include the selection of the needle geometry in the inverse planning problem and study whether uncertainties due to tissue deformation and needle deflection can be handled by a novel stochastic optimization scheme. To evaluate and illustrate the approach, we consider a prostate brachytherapy scenario. Particularly, we consider uncertainty in the needle's tip position, e.g., due to overly conservative insertion to avoid risking bladder damage, due to errors defining the needle tip in the images, or due to the limited seed positioning repeatability of the afterloading unit.
                BibTeX:
                @article{Yu2018,
                  author = {Thomas Yu and F.-A. Siebert and Alexander Schlaefer},
                  title = {A stochastic optimization approach accounting for uncertainty in HDR brachytherapy needle placement},
                  journal = {International Journal of Computer Assisted Radiology and Surgery},
                  publisher = {Springer Science and Business Media LLC},
                  year = {2018},
                  volume = {13},
                  number = {S1},
                  pages = {1--273},
                  url = {https://doi.org/10.1007/s11548-018-1766-y},
                  doi = {10.1007/s11548-018-1766-y}
                }
                

                2017

                • R. Berndt, R. Rusch, L. Hummitzsch, M. Lutz, K. Heß, K. Huenges, B. Panholzer, C. Otte, A. Haneya, G. Lutter, A. Schlaefer, J. Cremer, J. Groß (2017), "Development of a new catheter prototype for laser thrombolysis under guidance of optical coherence tomography (OCT): validation of feasibility and efficacy in a preclinical model", Journal of Thrombosis and Thrombolysis., jan, 2017. Vol. 43(3),352-360. Springer Nature. [BibTeX] [DOI]
                • BibTeX:
                  @article{Berndt2017,
                    author = {R. Berndt and R. Rusch and L. Hummitzsch and M. Lutz and K. Heß and K. Huenges and B. Panholzer and C. Otte and A. Haneya and G. Lutter and A. Schlaefer and J. Cremer and J. Groß},
                    title = {Development of a new catheter prototype for laser thrombolysis under guidance of optical coherence tomography (OCT): validation of feasibility and efficacy in a preclinical model},
                    journal = {Journal of Thrombosis and Thrombolysis},
                    publisher = {Springer Nature},
                    year = {2017},
                    volume = {43},
                    number = {3},
                    pages = {352-360},
                    doi = {10.1007/s11239-016-1470-0}
                  }
                  
                • J. Dahmen, C. Otte, M. Fuh, S. Maier, M. Schlüter, S.-T. Antoni, N.-O. Hansen, R. Miller, H. Schlüter, A. Schlaefer (2017), "Massenspektrometrische Gewebeanalyse mittels OCT-navigierter PIR-Laserablation", In 16. Jahrestagung der Deutschen Gesellschaft für Computer- und Roboter Assistierte Chirurgie. ,112-116. [BibTeX]
                • BibTeX:
                  @inproceedings{Dahmen2017,
                    author = {J. Dahmen and C. Otte and M. Fuh and S. Maier and M. Schlüter and S.-T. Antoni and N.-O. Hansen and R.J.D. Miller and H. Schlüter and A. Schlaefer},
                    title = {Massenspektrometrische Gewebeanalyse mittels OCT-navigierter PIR-Laserablation},
                    booktitle = {16. Jahrestagung der Deutschen Gesellschaft für Computer- und Roboter Assistierte Chirurgie},
                    year = {2017},
                    pages = {112-116}
                  }
                  
                • N. Gdaniec, M. Schlüter, M. Möddel, M. Kaul, K. Krishnan, A. Schlaefer, T. Knopp (2017), "Detection and Compensation of Periodic Motion in Magnetic Particle Imaging", IEEE Transactions on Medical Imaging. Vol. 36(7),1511-1521. [Abstract] [BibTeX] [DOI]
                 • Abstract: The temporal resolution of the tomographic imaging method magnetic particle imaging (MPI) is remarkably high. The spatial resolution is degraded for measured voltage signals with low signal-to-noise ratio, because the regularization in the image reconstruction step needs to be increased for system-matrix approaches and for deconvolution steps in x-space approaches. To improve the signal-to-noise ratio, block-wise averaging of the signal over time can be advantageous. However, since block-wise averaging decreases the temporal resolution, it prevents resolving the motion. In this paper, a framework for averaging motion-corrupted MPI raw data is proposed. The motion is considered to be periodic as it is the case for respiration and/or the heartbeat. The same state of motion is thus reached repeatedly in a time series exceeding the repetition time of the motion and can be used for averaging. As the motion process and the acquisition process are, in general, not synchronized, averaging of the captured MPI raw data corresponding to the same state of motion requires shifting the starting point of the individual frames. For high-frequency motion, a higher frame rate is potentially required. To address this issue, a binning method for using only parts of complete frames from a motion cycle is proposed that further reduces the motion artifacts in the final images. The frequency of motion is derived directly from the MPI raw data signal without the need to capture an additional navigator signal. Using a motion phantom, it is shown that the proposed method is capable of averaging experimental data with reduced motion artifacts. The methods are further validated on in-vivo data from mouse experiments to compensate for the heartbeat.
                  BibTeX:
                  @article{Gdaniec2017,
                    author = {N. Gdaniec and M. Schlüter and M. Möddel and M. Kaul and K. Krishnan and A. Schlaefer and T. Knopp},
                    title = {Detection and Compensation of Periodic Motion in Magnetic Particle Imaging},
                    journal = {IEEE Transactions on Medical Imaging},
                    year = {2017},
                    volume = {36},
                    number = {7},
                    pages = {1511-1521},
                    doi = {10.1109/TMI.2017.2666740}
                  }
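                 As a rough illustration of the averaging scheme described in the abstract above (not the authors' implementation), the following NumPy-only sketch assigns each frame a phase of an assumed periodic motion and averages all frames belonging to the same motion state; the motion frequency is taken from the strongest non-DC component of the raw signal. All names and the binning into eight motion states are hypothetical.

                     import numpy as np

                     def dominant_frequency(signal, sample_rate):
                         """Strongest non-DC frequency component of a 1D raw-data signal (Hz)."""
                         spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))
                         freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
                         return freqs[np.argmax(spectrum[1:]) + 1]

                     def phase_binned_average(frames, frame_rate, motion_freq, n_bins=8):
                         """Average frames that belong to the same state of a periodic motion.

                         frames      : array of shape (n_frames, ...) with the raw data frames
                         frame_rate  : acquisition rate in Hz
                         motion_freq : motion frequency in Hz (e.g. from dominant_frequency)
                         n_bins      : number of motion states to resolve
                         """
                         frames = np.asarray(frames, dtype=float)
                         t = np.arange(frames.shape[0]) / frame_rate     # frame acquisition times
                         phase = (t * motion_freq) % 1.0                 # motion phase in [0, 1)
                         state = np.floor(phase * n_bins).astype(int)    # motion-state index per frame

                         averaged = np.zeros((n_bins,) + frames.shape[1:])
                         for b in range(n_bins):
                             members = frames[state == b]
                             if len(members):                            # mean over one motion state
                                 averaged[b] = members.mean(axis=0)
                         return averaged

                 Averaging per motion state rather than over the full time series is what preserves the motion while still improving the signal-to-noise ratio.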
                  
                • S. Gerlach, I. Kuhlemann, F. Ernst, C. Fürweger, A. Schlaefer (2017), "Impact of robotic ultrasound image guidance on plan quality in SBRT of the prostate", The British Journal of Radiology. Vol. 90(1078),20160926. [Abstract] [BibTeX] [DOI] [URL]
                 • Abstract: Objective: Ultrasound provides good image quality, fast volumetric imaging and is established for abdominal image guidance. Robotic transducer placement may facilitate intrafractional motion compensation in radiation therapy. We consider integration with the CyberKnife and study whether the kinematic redundancy of a seven-degrees-of-freedom robot allows for acceptable plan quality for prostate treatments. Methods: Reference treatment plans were generated for 10 prostate cancer cases previously treated with the CyberKnife. Considering transducer and prostate motion by different safety margins, 10 different robot poses, and 3 different elbow configurations, we removed all beams colliding with robot or transducer. For each combination, plans were generated using the same strict dose constraints and the objective to maximize the target coverage. Additionally, plans for the union of all unblocked beams were generated. Results: In 9 cases the planning target coverage with the ultrasound robot was within 1.1 percentage points of the reference coverage. It was 1.7 percentage points for one large prostate. For one preferable robot position, kinematic redundancy decreased the average number of blocked beam directions from 23.1 to 14.5. Conclusion: The impact of beam blocking can largely be offset by treatment planning and using a kinematically redundant robot. Plan quality can be maintained by carefully choosing the ultrasound robot position and pose. For smaller planning target volumes the difference in coverage is negligible for safety margins of up to 35 mm. Advances in knowledge: Integrating a robot for online intrafractional image guidance based on ultrasound can be realized while maintaining acceptable plan quality for prostate cancer treatments with the CyberKnife.
                  BibTeX:
                  @article{Gerlach2017a,
                    author = {S. Gerlach and I. Kuhlemann and F. Ernst and C. Fürweger and A. Schlaefer},
                    title = {Impact of robotic ultrasound image guidance on plan quality in SBRT of the prostate},
                    journal = {The British Journal of Radiology},
                    year = {2017},
                    volume = {90},
                    number = {1078},
                    pages = {20160926},
                    note = {PMID: 28749165},
                    url = {https://doi.org/10.1259/bjr.20160926},
                    doi = {10.1259/bjr.20160926}
                  }
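                 The beam-blocking analysis described in the abstract above can be mimicked with a small geometric sketch (a simplification, not the planning system used in the paper): robot and transducer are modelled as spheres inflated by a safety margin, and every candidate beam whose source-to-target segment passes through one of them is discarded. The function names and the sphere approximation are assumptions for illustration only.

                     import numpy as np

                     def point_segment_distance(p, a, b):
                         """Shortest distance from point p to the segment a-b (3D NumPy arrays)."""
                         ab = b - a
                         t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
                         return np.linalg.norm(p - (a + t * ab))

                     def unblocked_beams(sources, target, obstacle_centers, obstacle_radii, margin):
                         """Keep beam source positions whose beam towards the target stays clear
                         of all obstacle spheres (robot links / transducer) inflated by a margin."""
                         kept = []
                         for s in sources:
                             clear = all(
                                 point_segment_distance(np.asarray(c), np.asarray(s), np.asarray(target)) > r + margin
                                 for c, r in zip(obstacle_centers, obstacle_radii)
                             )
                             if clear:
                                 kept.append(s)
                         return np.asarray(kept)

                 Treatment planning would then be restricted to the remaining beam directions, which is why a larger solid angle of candidate source positions helps offset the blocking.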
                  
                 • S. Gerlach, I. Kuhlemann, P. Jauer, R. Bruder, F. Ernst, C. Fürweger, A. Schlaefer (2017), "Robotic ultrasound-guided SBRT of the prostate: feasibility with respect to plan quality", International Journal of Computer Assisted Radiology and Surgery., Jan, 2017. Vol. 12(1),149-159. [Abstract] [BibTeX] [URL]
                 • Abstract: Advances in radiation therapy delivery systems have enabled motion compensated SBRT of the prostate. A remaining challenge is the integration of fast, non-ionizing volumetric imaging. Recently, robotic ultrasound has been proposed as an intra-fraction image modality. We study the impact of integrating a light-weight robotic arm carrying an ultrasound probe with the CyberKnife system. Particularly, we analyze the effect of different robot poses on the plan quality. A method to detect the collision of beams with the robot or the transducer was developed and integrated into our treatment planning system. A safety margin accounts for beam motion and uncertainties. Using strict dose bounds and the objective to maximize target coverage, we generated a total of 7650 treatment plans for five different prostate cases. For each case, ten different poses of the ultrasound robot and transducer were considered. The effect of different sets of beam source positions and different motion margins ranging from 5 to 50 mm was analyzed. Compared to reference plans without the ultrasound robot, the coverage typically drops for all poses. Depending on the patient, the robot pose, and the motion margin, the reduction in coverage may be up to 50 percentage points. However, for all patient cases, there exist poses for which the loss in coverage was below 1 percentage point for motion margins of up to 20 mm. In general, there is a positive correlation between the number of treatment beams and the coverage. While the blocking of beam directions has a negative effect on the plan quality, the results indicate that a careful choice of the ultrasound robot's pose and a large solid angle covered by beam starting positions can offset this effect. Identifying robot poses that yield acceptable plan quality and allow for intra-fraction ultrasound image guidance, therefore, appears feasible.
                  BibTeX:
                  @article{Gerlach2017,
                    author = {S. Gerlach and I. Kuhlemann and P. Jauer and R. Bruder and F. Ernst and C. Fürweger and A. Schlaefer},
                    title = {Robotic ultrasound-guided SBRT of the prostate: feasibility with respect to plan quality},
                     journal = {International Journal of Computer Assisted Radiology and Surgery},
                    year = {2017},
                    volume = {12},
                    number = {1},
                    pages = {149-159},
                    url = {http://dx.doi.org/10.1007/s11548-016-1455-7}
                  }
                  
                 • F. Griese, T. Knopp, R. Werner, A. Schlaefer, M. Möddel (2017), "Submillimeter-Accurate Marker Localization within Low Gradient Magnetic Particle Imaging Tomograms", International Journal on Magnetic Particle Imaging. Vol. 3(1),1703011. [Abstract] [BibTeX] [DOI] [URL]
                • Abstract: Magnetic Particle Imaging (MPI) achieves a high temporal resolution, which opens up a wide range of real-time medical applications such as device tracking and navigation. These applications usually rely on automated techniques for finding and localizing devices and fiducial markers in medical images. In this work, we show that submillimeter-accurate automatic marker localization from low gradient MPI tomograms with a spatial resolution of several millimeters is possible. Markers are initially identified within the tomograms by a thresholding-based segmentation algorithm. Subsequently, their positions are accurately determined by calculating the center of mass of the gray values inside the pre-segmented regions. A series of phantom measurements taken at full temporal resolution (46 Hz) is used to analyze statistical and systematical errors and to discuss the performance and stability of the automatic submillimeter-accurate marker localization algorithm
                  BibTeX:
                  @article{Griese2017,
                    author = {F. Griese and T. Knopp and R. Werner and A. Schlaefer and M. Möddel},
                    title = {Submillimeter-Accurate Marker Localization within Low Gradient Magnetic Particle Imaging Tomograms},
                    journal = {International Journal on Magnetic Particle Imaging},
                    year = {2017},
                    volume = {3},
                    number = {3},
                    pages = {1703011},
                    url = {http://hdl.handle.net/11420/8442},
                    doi = {10.15480/882.3244}
                  }
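                 The two-step localization described in the abstract above translates into a minimal sketch, assuming SciPy is available (the threshold value and the fractional voxel-coordinate output are placeholders; the published method additionally analyzes statistical and systematic errors):

                     import numpy as np
                     from scipy import ndimage

                     def localize_markers(tomogram, threshold):
                         """Sub-voxel marker localization from a low-resolution tomogram.

                         1. Threshold-based segmentation of candidate marker regions.
                         2. Gray-value-weighted center of mass inside each region.
                         Positions are returned in fractional voxel coordinates.
                         """
                         mask = tomogram > threshold                 # simple thresholding segmentation
                         labels, n = ndimage.label(mask)             # connected marker regions
                         centers = ndimage.center_of_mass(tomogram, labels, index=np.arange(1, n + 1))
                         return np.array(centers)                    # shape (n_markers, ndim)

                 The gray-value weighting is what pushes the accuracy below the voxel size of the tomogram.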
                  
                 • T. Hansen, O. Rajput, L. Matthäus, A. Schlaefer (2017), "Evaluation of real-time trajectory planning for head motion compensation", In 16. Jahrestagung der Deutschen Gesellschaft für Computer- und Roboter Assistierte Chirurgie. ,50-53. [BibTeX]
                • BibTeX:
                  @inproceedings{Hansen2017,
                     author = {T. Hansen and O. Rajput and L. Matthäus and A. Schlaefer},
                    title = {Evaluation of real-time trajectory planning for head motion compensation},
                    booktitle = {16. Jahrestagung der Deutschen Gesellschaft für Computer- und Roboter Assistierte Chirurgie},
                    year = {2017},
                    pages = {50-53}
                  }
                  
                • C. Hatzfeld, S. Wismath, M. Hessinger, R. Werthschützky, A. Schlaefer, M. Kupnik (2017), "A miniaturized sensor for needle tip force measurements", Biomedical Engineering / Biomedizinische Technik., In BMTMedPhys 2017. Dresden, September, 2017. Vol. 62(s1),s109–s115. [BibTeX] [DOI]
                • BibTeX:
                  @article{Hatzfeld2017,
                    author = {C. Hatzfeld and S. Wismath and M. Hessinger and R. Werthschützky and A. Schlaefer and M. Kupnik},
                    title = {A miniaturized sensor for needle tip force measurements},
                    booktitle = {BMTMedPhys 2017},
                    journal = {Biomedical Engineering / Biomedizinische Technik},
                    year = {2017},
                    volume = {62},
                    number = {s1},
                    pages = {s109–s115},
                    doi = {10.1515/bmt-2017-5026}
                  }
                  
                • O. Ismail, O. Rajput, L. Matthäus, A. Schlaefer (2017), "Comparison of correspondence-free and correspondence-based hand-eye calibration", In 16. Jahrestagung der Deutschen Gesellschaft für Computer- und Roboter Assistierte Chirurgie. ,6-10. [BibTeX]
                • BibTeX:
                  @inproceedings{Ismail2017,
                    author = {O. Ismail and O. Rajput and L. Matthäus and A. Schlaefer},
                    title = {Comparison of correspondence-free and correspondence-based hand-eye calibration},
                    booktitle = {16. Jahrestagung der Deutschen Gesellschaft für Computer- und Roboter Assistierte Chirurgie},
                    year = {2017},
                    pages = {6-10}
                  }
                  
                • K. V. Laino, T. Saathoff, T. R. Savarimuthu, K. L. Schwaner, N. Gessert, A. Schlaefer (2017), "Design and implementation of a wireless instrument adapter", In 16. Jahrestagung der Deutschen Gesellschaft für Computer- und Roboter Assistierte Chirurgie. ,31-36. [BibTeX]
                • BibTeX:
                  @inproceedings{Laino2017,
                    author = {K. V. Laino and T. Saathoff and T. R. Savarimuthu and K. Lindberg Schwaner and N. Gessert and A. Schlaefer},
                    title = {Design and implementation of a wireless instrument adapter},
                    booktitle = {16. Jahrestagung der Deutschen Gesellschaft für Computer- und Roboter Assistierte Chirurgie},
                    year = {2017},
                    pages = {31-36}
                  }
                  
                • S. Latus, M. Lutz, T. Saathoff, N. Frey, A. Schlaefer (2017), "The quantitative effect of catheter bending on artery volume estimation for IVOCT", In Optics in Cardiology Symposium 2017. Rotterdam, Netherlands [BibTeX] [URL]
                • BibTeX:
                  @inproceedings{Latus2017a,
                    author = {S. Latus and M. Lutz and T. Saathoff and N. Frey and A. Schlaefer},
                    title = {The quantitative effect of catheter bending on artery volume estimation for IVOCT},
                    booktitle = {Optics in Cardiology Symposium 2017},
                    year = {2017},
                    url = {http://www.opticsincardiology.org/program/posters/}
                  }
                  
                • S. Latus, C. Otte, M. Schlüter, J. Rehra, K. Bizon, H. Schulz-Hildebrandt, T. Saathoff, G. Hüttmann, A. Schlaefer (2017), "An Approach for Needle Based Optical Coherence Elastography Measurements", Medical Image Computing and Computer-Assisted Intervention - MICCAI 2017. Cham ,655-663. Springer International Publishing. [Abstract] [BibTeX] [DOI] [URL]
                • Abstract: While navigation and interventional guidance are typically based on image data, the images do not necessarily reflect mechanical tissue properties. Optical coherence elastography (OCE) presents a modality with high sensitivity and very high spatial and temporal resolution. However, OCE has a limited field of view of only 2–5 mm depth. We present a side-facing needle probe to image externally induced shear waves from within soft tissue. A first method of quantitative needle-based OCE is provided. Using a time of flight setup, we establish the shear wave velocity and estimate the tissue elasticity. For comparison, an external scan head is used for imaging. Results for four different phantoms indicate a good agreement between the shear wave velocities estimated from the needle probe at different depths and the scan head. The velocities ranging from 0.9–3.4 m/s agree with the expected values, illustrating that tissue elasticity estimates from within needle probes are feasible.
                  BibTeX:
                  @article{Latus2017,
                    author = {S. Latus and C. Otte and M. Schlüter and J. Rehra and K. Bizon and H. Schulz-Hildebrandt and T. Saathoff and G. Hüttmann and A. Schlaefer},
                    title = {An Approach for Needle Based Optical Coherence Elastography Measurements},
                    journal = {Medical Image Computing and Computer-Assisted Intervention - MICCAI 2017},
                    publisher = {Springer International Publishing},
                    year = {2017},
                    pages = {655-663},
                    url = {https://doi.org/10.1007/978-3-319-66185-8_74},
                    doi = {10.1007/978-3-319-66185-8_74}
                  }
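                 The time-of-flight estimate mentioned in the abstract reduces to a short calculation; the sketch below uses the common incompressible soft-tissue approximation E ≈ 3ρc², which may differ from the exact model used in the paper, and all parameter names are hypothetical.

                     def shear_wave_elasticity(dx_m, dt_s, density=1000.0):
                         """Estimate tissue elasticity from a time-of-flight shear wave measurement.

                         dx_m    : distance between the two observation points in meters
                         dt_s    : difference of the shear-wave arrival times in seconds
                         density : tissue density in kg/m^3 (about 1000 for soft tissue)
                         """
                         c = dx_m / dt_s                      # shear wave velocity in m/s
                         shear_modulus = density * c ** 2     # mu = rho * c^2
                         youngs_modulus = 3.0 * shear_modulus # E ~ 3 * mu for incompressible tissue
                         return c, youngs_modulus

                 Under this approximation, the reported velocity range of 0.9-3.4 m/s corresponds to roughly 2.4-35 kPa.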
                  
                • S. Lehmann, S.-T. Antoni, A. Schlaefer, S. Schupp (2017), "Detection of Head Motion Artifacts based on a Statistical Online Model-Checking Approach", In Quantitative Evaluation of Systems - 14th International Conference. Berlin, Germany, September 5-7, 2017. [Abstract] [BibTeX]
                 • Abstract: Many safety-critical applications in the medical domain, including dynamic tracking systems for real-world entities and motion control systems for cyber-physical devices, need to be checked continuously to facilitate a quick reaction to system failures and environmental changes. In this paper, we describe a combined solution of online model checking and the existing statistical model checking technique, building on a model representation of possible patient behaviours. We apply our concept in a case study on head motion tracking, for which we perform online motion pattern recognition and verification to decide on the most probable pattern at each time step as a basis for countermeasures.
                  BibTeX:
                  @inproceedings{Lehmann2017,
                    author = {S. Lehmann and S.-T. Antoni and A. Schlaefer and S. Schupp},
                    title = {Detection of Head Motion Artifacts based on a Statistical Online Model-Checking Approach},
                    booktitle = {Quantitative Evaluation of Systems - 14th International Conference},
                    year = {2017}
                  }
                  
                • O. Rajput, M. Schlüter, N. Gessert, T. R. Savarimuthu, C. Otte, S.-T. Antoni, A. Schlaefer (2017), "Robotic OCT Volume Acquisition Using a Single Fiber", In 16. Jahrestagung der Deutschen Gesellschaft für Computer- und Roboter Assistierte Chirurgie. ,232-233. [BibTeX]
                • BibTeX:
                  @inproceedings{Rajput2017,
                    author = {O. Rajput and M. Schlüter and N. Gessert and T. R. Savarimuthu and C. Otte and S.-T. Antoni and A. Schlaefer},
                    title = {Robotic OCT Volume Acquisition Using a Single Fiber},
                    booktitle = {16. Jahrestagung der Deutschen Gesellschaft für Computer- und Roboter Assistierte Chirurgie},
                    year = {2017},
                    pages = {232-233}
                  }
                  

                  2016

                  • J. Ackermann, C. Otte, G. Hüttmann, A. Schlaefer (2016), "Methods for Needle Motion Estimation from OCT Data", In 15. Jahrestagung der Deutschen Gesellschaft für Computer- und Roboter Assistierte Chirurgie., September, 2016. ,208-213. [BibTeX]
                  • BibTeX:
                    @inproceedings{Ackermann2016,
                      author = {J. Ackermann and C. Otte and G. Hüttmann and A. Schlaefer},
                      title = {Methods for Needle Motion Estimation from OCT Data},
                      booktitle = {15. Jahrestagung der Deutschen Gesellschaft für Computer- und Roboter Assistierte Chirurgie},
                      year = {2016},
                      pages = {208-213}
                    }
                    
                  • S.-T. Antoni, X. Ma, S. Schupp, A. Schlaefer (2016), "Reducing false discovery rates for on-line model-checking based detection of respiratory motion artifacts", In Gemeinsamer Tagungsband der Workshops der Tagung Software Engineering 2016 (SE 2016), Wien, Feb. 2016., February, 2016. ,182-186. [Abstract] [BibTeX]
                  • Abstract: Compensating respiratory motion in radiosurgery is an important problem and can lead to a more focused dose delivered to the patient. We previously showed the negative effect of respiratory artifacts on the error of the correlation model, connecting external and internal motion, for meaningful episodes from treatments with the Accuray CyberKnife(R). We applied on-line model checking, an iterative fail safety method, to respiratory motion. In this paper we vary its prediction parameter and decrease the previously rather high false discovery rate by 30.3%. In addition, we were able to increase the number of detected meaningful episodes through adaptive parameter choice by 452%.
                    BibTeX:
                    @inproceedings{Antoni2016,
                      author = {S.-T. Antoni and X. Ma and S. Schupp and A. Schlaefer},
                      title = {Reducing false discovery rates for on-line model-checking based detection of respiratory motion artifacts},
                      booktitle = {Gemeinsamer Tagungsband der Workshops der Tagung Software Engineering 2016 (SE 2016), Wien, Feb. 2016},
                      year = {2016},
                      pages = {182-186}
                    }
                    
                  • S.-T. Antoni, T. Saathoff, A. Schlaefer (2016), "On the effect of training for gesture control of a robotic microscope", In CURAC 2016 - Tagungsband. Bern, Switzerland ,298-299. [BibTeX]
                  • BibTeX:
                    @inproceedings{Antoni2016a,
                      author = {S.-T. Antoni and T. Saathoff and A. Schlaefer},
                      title = {On the effect of training for gesture control of a robotic microscope},
                      booktitle = {CURAC 2016 - Tagungsband},
                      year = {2016},
                      pages = {298-299}
                    }
                    
                  • J. Beringhoff, C. Otte, M. Schlüter, T. Saathoff, A. Schlaefer (2016), "Kontaktlose Schätzung der Interaktionskräfte chirurgischer Instrumente.", In 16. Jahrestagung der Deutschen Gesellschaft für Computer- und Roboter Assistierte Chirurgie CURAC. ,191-196. [BibTeX]
                  • BibTeX:
                    @inproceedings{Beringhoff2016,
                      author = {J. Beringhoff and C. Otte and M. Schlüter and T. Saathoff and A. Schlaefer},
                      title = {Kontaktlose Schätzung der Interaktionskräfte chirurgischer Instrumente.},
                      booktitle = {16. Jahrestagung der Deutschen Gesellschaft für Computer- und Roboter Assistierte Chirurgie CURAC},
                      year = {2016},
                      pages = {191-196}
                    }
                    
                   • A. Diercks, A. Schlaefer (2016), "Features for predicting gait events using inertial measurement units", Gait & Posture., September, 2016. Vol. 49,256. [BibTeX] [DOI] [URL]
                  • BibTeX:
                    @article{Diercks2016,
                      author = {A. Diercks and A. Schlaefer},
                      title = {Features for predicting gait events using inertial measurement units},
                      journal = {Gait & Posture},
                      year = {2016},
                      volume = {49},
                      pages = {256},
                      url = {http://dx.doi.org/10.1016/j.gaitpost.2016.07.310},
                      doi = {10.1016/j.gaitpost.2016.07.310}
                    }
                    
                  • S. Gerlach, I. Kuhlemann, F. Ernst, C. Fürweger, A. Schlaefer (2016), "SU-G-JeP3-03: Effect of Robot Pose On Beam Blocking for Ultrasound Guided SBRT of the Prostate", Medical Physics. Vol. 43(6),3670-3671. [Abstract] [BibTeX] [DOI] [URL]
                   • Abstract: Purpose: Ultrasound presents a fast, volumetric image modality for real-time tracking of abdominal organ motion. However, ultrasound transducer placement during radiation therapy is challenging. Recently, approaches using robotic arms for intra-treatment ultrasound imaging have been proposed. Good and reliable imaging requires placing the transducer close to the PTV. We studied the effect of a seven degrees of freedom robot on the feasible beam directions. Methods: For five CyberKnife prostate treatment plans we established viewports for the transducer, i.e., points on the patient surface with a soft tissue view towards the PTV. Choosing a feasible transducer pose and using the kinematic redundancy of the KUKA LBR iiwa robot, we considered three robot poses. Poses 1 to 3 had the elbow point anterior, superior, and inferior, respectively. For each pose and each beam starting point, the projections of robot and PTV were computed. We added a 20 mm margin accounting for organ / beam motion. The number of nodes for which the PTV was partially or fully blocked was established. Moreover, the cumulative overlap for each of the poses and the minimum overlap over all poses were computed. Results: The fully and partially blocked nodes ranged from 12% to 20% and 13% to 27%, respectively. Typically, pose 3 caused the fewest blocked nodes. The cumulative overlap ranged from 19% to 29%. Taking the minimum overlap, i.e., considering moving the robot's elbow while maintaining the transducer pose, the cumulative overlap was reduced to 16% to 18% and was 3% to 6% lower than for the best individual pose. Conclusion: Our results indicate that it is possible to identify feasible ultrasound transducer poses and to use the kinematic redundancy of a 7 DOF robot to minimize the impact of the imaging subsystem on the feasible beam directions for ultrasound guided and motion compensated SBRT. Research partially funded by DFG grants ER 817/1-1 and SCHL 1844/3-1.
                    BibTeX:
                    @article{Gerlach2016a,
                      author = {S. Gerlach and I. Kuhlemann and F. Ernst and C. Fürweger and A. Schlaefer},
                      title = {SU-G-JeP3-03: Effect of Robot Pose On Beam Blocking for Ultrasound Guided SBRT of the Prostate},
                      journal = {Medical Physics},
                      year = {2016},
                      volume = {43},
                      number = {6},
                      pages = {3670-3671},
                      url = {http://scitation.aip.org/content/aapm/journal/medphys/43/6/10.1118/1.4957068},
                      doi = {10.1118/1.4957068}
                    }
                    
                  • S. Gerlach, I. Kuhlemann, P. Jauer, R. Bruder, F. Ernst, C. Fürweger, A. Schlaefer (2016), "Feasibility of robotic ultrasound guided SBRT of the prostate", CARS 2016, 30th International Congress and Exhibition., In CARS 2016, 30th International Congress and Exhibition. Heidelberg [BibTeX]
                  • BibTeX:
                    @inproceedings{Gerlach2016,
                      author = {S. Gerlach and I. Kuhlemann and P. Jauer and R. Bruder and F. Ernst and C. Fürweger and A. Schlaefer},
                      title = {Feasibility of robotic ultrasound guided SBRT of the prostate},
                      booktitle = {CARS 2016, 30th International Congress and Exhibition},
                      journal = {CARS 2016, 30th International Congress and Exhibition},
                      year = {2016}
                    }
                    
                   • M. Hofmann, K. Bizon, A. Schlaefer, T. Knopp (2016), "Subpixelgenaue Positionsbestimmung in Magnetic-Particle-Imaging", Bildverarbeitung für die Medizin 2016, Algorithmen – Systeme – Anwendungen., In Bildverarbeitung für die Medizin 2016 - Algorithmen - Systeme - Anwendungen., March, 2016. ,20-25. [Abstract] [BibTeX] [DOI] [URL]
                   • Abstract: The tomographic imaging method magnetic particle imaging (MPI) offers a high temporal resolution in the low millisecond range. For the navigation of labeled catheters, however, the spatial resolution is too low. In this work we show that submillimeter-accurate position determination is possible even though the acquired data have a lower resolution. To this end, the low-resolution MPI data are preprocessed and the position of a sample is determined via the center of mass of the gray values. Statistical errors and systematic deviations of the method are estimated from measurement data.
                    BibTeX:
                    @article{Hofmann2016,
                       author = {M. Hofmann and K. Bizon and A. Schlaefer and T. Knopp},
                      title = {Subpixelgenaue Positionsbestimmung in Magnetic-Particle-Imaging},
                      booktitle = {Bildverarbeitung für die Medizin 2016 - Algorithmen - Systeme - Anwendungen},
                      journal = {Bildverarbeitung für die Medizin 2016, Algorithmen – Systeme – Anwendungen},
                      year = {2016},
                      pages = {20-25},
                      url = {http://dx.doi.org/10.1007/978-3-662-49465-3_6},
                      doi = {10.1007/978-3-662-49465-3_6}
                    }
                    
                  • S. Latus, M. Lutz, C. Otte, S.-T. Antoni, N. Frey, A. Schlaefer (2016), "Estimation of Arterial Vasomotion Using Intravascular Optical Coherence Tomography", 15. Jahrestagung der Deutschen Gesellschaft für Computer- und Roboter Assistierte Chirurgie., June, 2016. [BibTeX]
                  • BibTeX:
                    @article{Latus2016a,
                      author = {S. Latus and M. Lutz and C. Otte and S.-T. Antoni and N. Frey and A. Schlaefer},
                      title = {Estimation of Arterial Vasomotion Using Intravascular Optical Coherence Tomography},
                      journal = {15. Jahrestagung der Deutschen Gesellschaft für Computer- und Roboter Assistierte Chirurgie},
                      year = {2016}
                    }
                    
                  • S. Latus, M. Lutz, C. Otte, T. Saathoff, K. Schulz, N. Frey, A. Schlaefer (2016), "A setup for systematic evaluation and optimization of OCT imaging in the coronary arteries", In Proceedings, supplement of the International Journal of CARS'2016., May, 2016. Vol. 11(Suppl 1),175-176. [BibTeX] [DOI] [URL]
                  • BibTeX:
                    @inproceedings{Latus2016,
                      author = {S. Latus and M. Lutz and C. Otte and T. Saathoff and K. Schulz and N. Frey and A. Schlaefer},
                      title = {A setup for systematic evaluation and optimization of OCT imaging in the coronary arteries},
                      booktitle = {Proceedings, supplement of the International Journal of CARS'2016},
                      year = {2016},
                      volume = {11},
                      number = {Suppl 1},
                      pages = {175-176},
                      url = {http://dx.doi.org/10.1007/s11548-016-1412-5},
                      doi = {10.1007/s11548-016-1412-5}
                    }
                    
                  • C. Otte, J. Beringhoff, S. Latus, S.-T. Antoni, O. Rajput, A. Schlaefer (2016), "Towards Force Sensing Based on Instrument-Tissue Interaction", In 2016 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI). Baden-Baden, September, 2016. ,180-185. [BibTeX] [URL]
                  • BibTeX:
                    @inproceedings{Otte2016,
                      author = {C. Otte and J. Beringhoff and S. Latus and S.-T. Antoni and O. Rajput and A. Schlaefer},
                      title = {Towards Force Sensing Based on Instrument-Tissue Interaction},
                      booktitle = {2016 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)},
                      year = {2016},
                      pages = {180-185},
                      url = {https://ras.papercept.net/conferences/conferences/MFI16/program/MFI16_ContentListWeb_2.html}
                    }
                    
                   • A. Patel, C. Otte, A. Schlaefer, D. Nir, S. Otte, T. Ngo, T. Loke, M. Winkler (2016), "MP34-14 Investigating the feasibility of optical coherence tomography to identify prostate cancer - an ex-vivo study", The Journal of Urology., In Presented at the Annual Meeting of the American Urological Association (AUA)., May, 2016. Vol. 195(4),e476 - e477. [BibTeX] [DOI] [URL]
                  • BibTeX:
                    @article{Patel2016,
                      author = {A. Patel and C. Otte and A. Schlaefer and D. Nir and S. Otte and T. Ngo and T. Loke and M. Winkler},
                      title = {MP34-14 Investigating the feasibility of optical coherence tomography to identify prostate cancer - an ex-vivo study},
                       booktitle = {Presented at the Annual Meeting of the American Urological Association (AUA)},
                      journal = {The Journal of Urology},
                      year = {2016},
                      volume = {195},
                      number = {4},
                      pages = {e476 - e477},
                      url = {http://www.jurology.com/article/S0022-5347(16)01860-7/abstract},
                      doi = {10.1016/j.juro.2016.02.1572}
                    }
                    
                  • S. Prevrhal, C. Spink, M. Grass, M. Bless, A. Schlaefer, H. Ittrich, M. Regier, G. Adam (2016), "ECR 2016 Book of Abstracts - B - Scientific Sessions and Clinical Trials in Radiology", Insights into Imaging., feb, 2016. Vol. 7(S1),162-465. Springer Science and Business Media LLC. [BibTeX] [DOI] [URL]
                  • BibTeX:
                    @article{Prevrhal2016,
                      author = {S. Prevrhal and C. Spink and M. Grass and M. Bless and A. Schlaefer and H. Ittrich and M. Regier and G. Adam},
                      title = {ECR 2016 Book of Abstracts - B - Scientific Sessions and Clinical Trials in Radiology},
                      journal = {Insights into Imaging},
                      publisher = {Springer Science and Business Media LLC},
                      year = {2016},
                      volume = {7},
                      number = {S1},
                      pages = {162--465},
                      note = {B-1353 14:40},
                      url = {https://doi.org/10.1007/s13244-016-0475-8},
                      doi = {10.1007/s13244-016-0475-8}
                    }
                    
                  • O. Rajput, S.-T. Antoni, C. Otte, T. Saathoff, L. Matthäus, A. Schlaefer (2016), "High Accuracy 3D Data Acquisition Using Co-Registered OCT and Kinect", In 2016 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems., September, 2016. [BibTeX] [URL]
                  • BibTeX:
                    @inproceedings{Rajput2016,
                      author = {O. Rajput and S.-T. Antoni and C. Otte and T. Saathoff and L. Matthäus and A. Schlaefer},
                      title = {High Accuracy 3D Data Acquisition Using Co-Registered OCT and Kinect},
                      booktitle = {2016 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems},
                      year = {2016},
                      url = {https://ras.papercept.net/conferences/conferences/MFI16/program/MFI16_ContentListWeb_2.html}
                    }
                    
                   • S.-T. Antoni, J. Rinast, X. Ma, S. Schupp, A. Schlaefer (2016), "Online model checking for monitoring surrogate-based respiratory motion tracking in radiation therapy", International Journal of Computer Assisted Radiology and Surgery., June, 2016. Vol. 11,2085-2096. Springer. [Abstract] [BibTeX] [DOI]
                   • Abstract: Objective: Correlation between internal and external motion is critical for respiratory motion compensation in radiosurgery. Artifacts like coughing, sneezing or yawning or changes in the breathing pattern can lead to misalignment between beam and tumor and need to be detected to interrupt the treatment. We propose online model checking (OMC), a model-based verification approach from the field of formal methods, to verify that the breathing motion is regular and the correlation holds. We demonstrate that OMC may be more suitable for artifact detection than the prediction error. Materials and methods: We established a sinusoidal model to apply OMC to the verification of respiratory motion. The method was parameterized to detect deviations from typical breathing motion. We analyzed the performance on synthetic data and on clinical episodes showing large correlation error. In comparison, we considered the prediction error of different state-of-the-art methods based on least mean squares (LMS; normalized LMS, nLMS; wavelet-based multiscale autoregression, wLMS), recursive least squares (RLSpred) and support vector regression (SVRpred). Results: On synthetic data, OMC outperformed wLMS by at least 30 % and SVRpred by at least 141 %, detecting 70 % of transitions. No artifacts were detected by nLMS and RLSpred. On patient data, OMC detected 23-49 % of the episodes correctly, outperforming nLMS, wLMS, RLSpred and SVRpred by up to 544, 491, 408 and 258 %, respectively. On selected episodes, OMC detected up to 94 % of all events. Conclusion: OMC is able to detect changes in breathing as well as artifacts which previously would have gone undetected, outperforming prediction error-based detection. Synthetic data analysis supports the assumption that prediction is very insensitive to specific changes in breathing. We suggest using OMC as an additional safety measure ensuring reliable and fast stopping of irradiation.
                    BibTeX:
                    @article{S.T.Antoni2016,
                       author = {S.-T. Antoni and J. Rinast and X. Ma and S. Schupp and A. Schlaefer},
                      title = {Online model checking for monitoring surrogate-based respiratory motion tracking in radiation therapy},
                      journal = {International Journal of Computer Assisted Radiology and Surgery},
                      publisher = {Springer},
                      year = {2016},
                      volume = {11},
                      pages = {2085-2096},
                      doi = {10.1007/s11548-016-1423-2}
                    }
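                   The sinusoidal breathing model mentioned in the abstract can be illustrated with a much simpler stand-in than online model checking itself: a residual check against a sinusoid fitted by linear least squares. Window length, tolerance and the fixed breathing frequency below are hypothetical parameters chosen for illustration only.

                       import numpy as np

                       def sinusoid_residual(window, sample_rate, breathing_freq):
                           """RMS deviation of a signal window from the best-fitting sinusoid at an
                           assumed breathing frequency (least squares over amplitude, phase, offset)."""
                           t = np.arange(len(window)) / sample_rate
                           design = np.column_stack([np.sin(2 * np.pi * breathing_freq * t),
                                                     np.cos(2 * np.pi * breathing_freq * t),
                                                     np.ones_like(t)])
                           coeffs, *_ = np.linalg.lstsq(design, window, rcond=None)
                           return np.sqrt(np.mean((window - design @ coeffs) ** 2))

                       def flag_irregular_windows(signal, sample_rate, breathing_freq,
                                                  window_s=5.0, threshold=1.0):
                           """Slide a window over an external breathing signal and flag windows whose
                           deviation from the sinusoidal model exceeds a tolerance (a crude stand-in
                           for the formal verification step; threshold is in signal units)."""
                           n = int(window_s * sample_rate)
                           flags = []
                           for start in range(0, len(signal) - n + 1, n):
                               rms = sinusoid_residual(np.asarray(signal[start:start + n], float),
                                                       sample_rate, breathing_freq)
                               flags.append(rms > threshold)
                           return flags

                   A flagged window would correspond to irregular breathing or an artifact, i.e., a situation in which irradiation should be interrupted.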
                    
                  • M. Schlüter, N. Gdaniec, A. Schlaefer, T. Knopp (2016), "Compensation of Periodic Motion for Averaging of Magnetic Particle Imaging Data", In IEEE Nuclear Science Symposium, Medical Imaging Conference and Room-Temperature Semiconductor Detector Workshop. Straßburg, September, 2016. ,1-2. [Abstract] [BibTeX]
                  • Abstract: The temporal resolution of magnetic particle imaging (MPI) is sufficiently high to capture dynamic processes like cardiac motion. The achievable spatial resolution of MPI is closely linked to the signal-to-noise ratio of the measured voltage signal. Therefore, in practice it can be advantageous to improve the signal-to-noise ratio by block-wise averaging the signal over time. However, this will decrease the temporal resolution such that cardiac motion is not resolved anymore. In the present work, we introduce a framework for averaging MPI data that exhibit periodic motion induced by e.g. respiration and/or the heart beat. The frequency of motion is directly derived from the MPI raw data without the need for an additional navigator signal. The short time Fourier transform is used for this purpose, because each of these periodic movements will have a frequency varying over time. In order to average the captured frames corresponding to the same phase of the motion, one has to calculate virtual frames since the data acquisition and the periodic motion are not synchronized. In a phantom study it is shown that the developed method is capable of averaging experimental data without introducing any motion artifacts.
                    BibTeX:
                    @conference{Schlueter2016,
                      author = {M. Schlüter and N. Gdaniec and A. Schlaefer and T. Knopp},
                      title = {Compensation of Periodic Motion for Averaging of Magnetic Particle Imaging Data},
                      booktitle = {IEEE Nuclear Science Symposium, Medical Imaging Conference and Room-Temperature Semiconductor Detector Workshop},
                      year = {2016},
                      pages = {1-2}
                    }
                    

                    2015

                     • S.-T. Antoni, A. Dabrowski, D. Schetelig, B.-P. Diercks, R. Fliegert, R. Werner, I. Wolf, A. H. Guse, A. Schlaefer (2015), "Segmentation of T-cells in fluorescence microscopy", In Proc. IEEE Engineering in Medicine and Biology Society (EMBC'15). Milan, Italy, August, 2015. [Abstract] [BibTeX] [URL]
                    • Abstract: In the adaptive immune system Calcium (Ca2+) is acting as a fundamental on-switch. Fluorescence microscopy is used to study the underlying mechanisms. However, living cells introduce motion, hence a precise segmentation is needed to gain knowledge about motion and deformation for subsequent analysis of (sub)cellular Ca2+ activity. We extend a segmentation algorithm and evaluate its performance using a novel scheme.
                      BibTeX:
                      @inproceedings{Antoni2015b,
                        author = {S.-T. Antoni and A. Dabrowski and D. Schetelig and B.-P. Diercks and R. Fliegert and R. Werner and I. Wolf and A. H. Guse and A. Schlaefer},
                        title = {Segmentation of T-cells in fluorescence microscopy},
                         booktitle = {Proc. IEEE Engineering in Medicine and Biology Society (EMBC'15)},
                        year = {2015},
                        url = {http://emb.citengine.com/event/embc-2015/paper-details?pdID=6596}
                      }
                      
                    • S.-T. Antoni, C. Otte, O. Rajput, K. Schulz, A. Schlaefer (2015), "Combined Ultrasound and OCT Imaging for Robotic Needle Placement", IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS'15)., In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS'15). Hamburg, Germany, October, 2015. ,4737. [Abstract] [BibTeX]
                    • Abstract: The precise placement of needles is a crucial requirement for many medical procedures. We present a setup to enhance the safe and precise placement of needles with respect to a certain target structure by use of co-registered Doppler-OCT and Ultrasound imaging as well as an articulated robot. External ultrasound imaging and Doppler OCT measurements from within the needle showed similar results in estimating the tissue displacement during needle insertion.
                      BibTeX:
                      @conference{Antoni2015d,
                        author = {S.-T. Antoni and C. Otte and O. Rajput and K. Schulz and A. Schlaefer},
                        title = {Combined Ultrasound and OCT Imaging for Robotic Needle Placement},
                        booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS'15)},
                        journal = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS'15)},
                        year = {2015},
                        pages = {4737}
                      }
                      
                    • S.-T. Antoni, C. Otte, O. Rajput, K. Schulz, A. Schlaefer (2015), "Hybrid Needle Localization using Co-registered Ultrasound and OCT Imaging", IROS 2015 Workshop Proceedings: Navigation and Actuation of Flexible Instruments in Medical Application (NAFIMA)., In Workshop Proceedings: Navigation and Actuation of Flexible Instruments in Medical Application (NAFIMA, IROS workshop). Hamburg, Germany, October, 2015. ,24-25. [Abstract] [BibTeX]
                    • Abstract: The precise placement of needle during brachytherapy is very important for the course of the treatment. During this procedure, different organs must be spared, for instance the bladder. We present a setup to enhance the safe and precise placement of needles with the use of co-registered OCT and Ultrasound imaging as well as an articulated robot.
                      BibTeX:
                      @inproceedings{Antoni2015c,
                        author = {S.-T. Antoni and C. Otte and O. Rajput and K. Schulz and A. Schlaefer},
                        editor = {Jessica Burgner-Kahrs and Alexander Schlaefer},
                        title = {Hybrid Needle Localization using Co-registered Ultrasound and OCT Imaging},
                        booktitle = {Workshop Proceedings: Navigation and Actuation of Flexible Instruments in Medical Application (NAFIMA, IROS workshop)},
                        journal = {IROS 2015 Workshop Proceedings: Navigation and Actuation of Flexible Instruments in Medical Application (NAFIMA)},
                        year = {2015},
                        pages = {24-25}
                      }
                      
                    • S.-T. Antoni, R. Plagge, R. Dürichen, A. Schlaefer (2015), "Detecting Respiratory Artifacts from Video Data", Informatik aktuell., In Bildverarbeitung für die Medizin 2015 Algorithmen - Systeme - Anwendungen. Proceedings des Workshops vom 15. bis 17. März 2015 in Lübeck., February, 2015. ,227-232. Springer Berlin Heidelberg. [Abstract] [BibTeX] [DOI] [URL]
                     • Abstract: Detecting artifacts in signals is an important problem in a wide range of research areas. In robotic radiotherapy, motion prediction is used to overcome latencies in the setup, with robustness affected by the occurrence of artifacts. For motion prediction the detection and especially the definition of artifacts can be challenging. We study the detection of artifacts like, e.g., coughing, sneezing or yawning. Manual detection can be time consuming. To assist manual annotation, we introduce a method based on kernel density estimation to detect intervals of artifacts in video data. We evaluate our method on a small set of test subjects. With 86 intervals of artifacts found by our method we are able to identify all 70 intervals derived from manual detection. Our results indicate a more exact choice of intervals and the identification of subtle artifacts like swallowing that were missed in the manual detection.
                      BibTeX:
                      @inproceedings{Antoni2015f,
                        author = {S.-T. Antoni and R. Plagge and R. Dürichen and A. Schlaefer},
                        editor = {Handels, Heinz and Deserno, Thomas Martin and Meinzer, Hans-Peter and Tolxdorff, Thomas},
                        title = {Detecting Respiratory Artifacts from Video Data},
                        booktitle = {Bildverarbeitung für die Medizin 2015 Algorithmen - Systeme - Anwendungen. Proceedings des Workshops vom 15. bis 17. März 2015 in Lübeck},
                        journal = {Informatik aktuell},
                        publisher = {Springer Berlin Heidelberg},
                        year = {2015},
                        pages = {227-232},
                        url = {http://link.springer.com/chapter/10.1007%2F978-3-662-46224-9_40},
                        doi = {10.1007/978-3-662-46224-9_40}
                      }
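                     A compact sketch of a kernel-density-based interval detector in the spirit of the abstract above (the per-frame feature, the 5% density quantile and the minimum interval length are assumptions, not the parameters used in the paper):

                         import numpy as np
                         from scipy.stats import gaussian_kde

                         def artifact_intervals(feature, density_quantile=0.05, min_len=3):
                             """Flag intervals of unusual frames from a 1D per-frame feature
                             (e.g. mean absolute frame difference of a video).

                             Frames whose value falls into a low-density region of the feature's
                             kernel density estimate are treated as artifact candidates; consecutive
                             candidates are merged into intervals of at least min_len frames."""
                             feature = np.asarray(feature, dtype=float)
                             kde = gaussian_kde(feature)
                             density = kde(feature)
                             is_artifact = density < np.quantile(density, density_quantile)

                             intervals, start = [], None
                             for i, flag in enumerate(is_artifact):
                                 if flag and start is None:
                                     start = i
                                 elif not flag and start is not None:
                                     if i - start >= min_len:
                                         intervals.append((start, i - 1))
                                     start = None
                             if start is not None and len(is_artifact) - start >= min_len:
                                 intervals.append((start, len(is_artifact) - 1))
                             return intervals

                     The returned frame intervals could then be reviewed by an annotator, which matches the assistive role the method plays in the paper.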
                      
                    • S.-T. Antoni, J. Rinast, S. Schupp, A. Schlaefer (2015), "Comparing Model-free Motion Prediction and On-line Model Checking for Respiratory Motion Management", Software Engineering (Workshops) 2015., In Gemeinsamer Tagungsband der Workshops der Tagung Software Engineering. Dresden, Germany ,15-18. [Abstract] [BibTeX] [URL]
                    • Abstract: Compensating for respiratory motion is a key challenge for stereotactic body radiation therapy. To overcome latencies in the systems, prediction of future motion is necessary. This is related to the assumption of a stable correlation between external and internal motion. We present a new application for on-line model checking to introduce fail-safety to respiratory motion prediction and show its relevance by comparing to the widely used nLMS predictor. We demonstrate that the regularity of the external motion can be modeled and tested using OMC and deviations from regular respiratory motion can be detected.
                      BibTeX:
                      @inproceedings{Antoni2015,
                        author = {S.-T. Antoni and J. Rinast and S. Schupp and A. Schlaefer},
                        title = {Comparing Model-free Motion Prediction and On-line Model Checking for Respiratory Motion Management},
                        booktitle = {Gemeinsamer Tagungsband der Workshops der Tagung Software Engineering},
                        journal = {Software Engineering (Workshops) 2015},
                        year = {2015},
                        pages = {15-18},
                        url = {http://ceur-ws.org/Vol-1337/paper4.pdf}
                      }
                      
                    • S.-T. Antoni, J. Rinast, S. Schupp, A. Schlaefer (2015), "Evaluation des Einflusses von Artefakten auf den Korrelationsfehler in der bewegungskompensierten Radiochirurgie", CURAC 2015., In Tagungsband der 14. Jahrestagung der Deutschen Gesellschaft für Computer- und Roboterassistierte Chirurgie. Bremen, Germany ,133-138. [Abstract] [BibTeX]
                    • Abstract: Die Kompensation von Atembewegung in der Radiochirurgie ist ein wichtiges Problem. Durch Atembewegungsausgleich kann die Strahlung besser fokussiert und so die Gesamtdosis für den Patienten verringert werden. Bisher ist es nicht möglich, die Position des Tumors während der Behandlung kontinuierlich zu bestimmen ohne den Patienten einer deutlich erhöhten Strahlungsdosis auszusetzen. Stattdessen wird die Atmung extern gemessen. Eine wichtige Voraussetzung für die Kompensation der Atembewegung ist daher die Bestimmung eines Korrelationsmodells, dass aus den externen Atmungsbewegungsdaten die Position des Tumors schätzt. Bisher ist nur wenig darüber bekannt, inwiefern Irregularitäten in der Atmung den Fehler des Korrelationsmodells beeinflussen. Wir wenden On-line Model Checking, ein iteratives Verfahren aus der Ausfallsicherheit, auf 194 mit dem CyberKnife(R) System aufgenommene Behandlungsdatensätze an, um Artefakte in diesen zu erkennen. Für physiologisch sinnvoll definierte Episoden können wir zeigen, dass Artefakte den Korrelationsfehler negativ beeinflussen können.
                      BibTeX:
                      @inproceedings{Antoni2015a,
                        author = {S.-T. Antoni and J. Rinast and S. Schupp and A. Schlaefer},
                        title = {Evaluation des Einflusses von Artefakten auf den Korrelationsfehler in der bewegungskompensierten Radiochirurgie},
                        booktitle = {Tagungsband der 14. Jahrestagung der Deutschen Gesellschaft für Computer- und Roboterassistierte Chirurgie},
                        journal = {CURAC 2015},
                        year = {2015},
                        pages = {133-138}
                      }
                      
                    • S.-T. Antoni, C. Sonnenburg, T. Saathoff, A. Schlaefer (2015), "Feasibility of interactive gesture control of a robotic microscope", Current Directions in Biomedical Engineering., September, 2015. Vol. 1(1),164-167. [Abstract] [BibTeX] [DOI] [URL]
                     • Abstract: Robotic devices become increasingly available in the clinics. One example is the motorized surgical microscope. While there are different scenarios on how to use the devices for autonomous tasks, simple and reliable interaction with the device is a key for acceptance by surgeons. We study how gesture tracking can be integrated into the setup of a robotic microscope. In our setup, a Leap Motion Controller is used to track hand motion and adjust the field of view accordingly. We demonstrate with a survey that moving the field of view over a specified course is possible even for untrained subjects. Our results indicate that touch-less interaction with robots carrying small, near field gesture sensors is feasible and can be of use in clinical scenarios, where robotic devices are used in direct proximity of patient and physicians.
                      BibTeX:
                      @article{Antoni2015e,
                        author = {S.-T. Antoni and C. Sonnenburg and T. Saathoff and A. Schlaefer},
                        title = {Feasibility of interactive gesture control of a robotic microscope},
                        journal = {Current Directions in Biomedical Engineering},
                        year = {2015},
                        volume = {1},
                        number = {1},
                        pages = {164-167},
                        note = {ISSN: 2364-5504},
                        url = {http://www.degruyter.com/dg/viewarticle.fullcontentlink:pdfeventlink/002fj002fcdbme.2015.1.issue-1002fcdbme-2015-0041002fcdbme-2015-0041.pdf?t:ac=j002fcdbme.2015.1.issue-1002fcdbme-2015-0041$002fcdbme-2015-0041.xml},
                        doi = {10.1515/cdbme-2015-0041}
                      }
                      
                    • D. Düwel, C. Otte, K. Schultz, T. Saathoff, A. Schlaefer (2015), "Towards contactless optical coherence elastography with acoustic tissue excitation", Current Directions in Biomedical Engineering., September, 2015. Vol. 1(1),215-219. De Gruyter. [Abstract] [BibTeX] [DOI]
                    • Abstract: Elastography presents an interesting approach to complement image data with mechanical tissue properties. Typically, the tissue is excited by direct contact to a probe. We study contactless elastography based on optical coherence tomography (OCT) and dynamic acoustic tissue excitation with airborne sound. We illustrate the principle and an implementation using sound waves of 135 Hz to excite the tissue. The displacement is measured and results of several tests indicate the feasibility to obtain a qualitative measure of the mechanical tissue properties. The approach is interesting for optical palpation, e.g., to enhance navigation and tissue characterization in minimally invasive and robot-assisted surgery
                      BibTeX:
                      @article{Duewel2015,
                        author = {D. Düwel and C. Otte and K. Schultz and T. Saathoff and A. Schlaefer},
                        editor = {O. Dössel},
                        title = {Towards contactless optical coherence elastography with acoustic tissue excitation},
                        journal = {Current Directions in Biomedical Engineering},
                        publisher = {De Gruyter},
                        year = {2015},
                        volume = {1},
                        number = {1},
                        pages = {215-219},
                        doi = {10.1515/CDBME-2015-0054}
                      }
                      
                    • J. Hagenah, M. Scharfschwerdt, A. Schlaefer, C. Metzner (2015), "A machine learning approach for planning valve-sparing aortic root reconstruction", Current Directions in Biomedical Engineering., September, 2015. Vol. 1(1),361-365. [Abstract] [BibTeX] [DOI] [URL]
                     • Abstract: Choosing the optimal prosthesis size and shape is a difficult task during surgical valve-sparing aortic root reconstruction. Hence, there is a need for surgery planning tools. Common surgery planning approaches try to model the mechanical behaviour of the aortic valve and its leaflets. However, these approaches suffer from inaccuracies due to unknown biomechanical properties and from a high computational complexity. In this paper, we present a new approach based on machine learning that avoids these problems. The valve geometry is described by geometrical features obtained from ultrasound images. We interpret the surgery planning as a learning problem, in which the features of the healthy valve are predicted from those of the dilated valve using support vector regression (SVR). Our first results indicate that a machine learning based surgery planning can be possible.
                      BibTeX:
                      @article{Hagenah2015,
                        author = {J. Hagenah and M. Scharfschwerdt and A. Schlaefer and C. Metzner},
                        title = {A machine learning approach for planning valve-sparing aortic root reconstruction},
                        journal = {Current Directions in Biomedical Engineering},
                        year = {2015},
                        volume = {1},
                        number = {1},
                        pages = {361-365},
                        note = {ISSN 2364-5504},
                        url = {http://www.degruyter.com/dg/viewarticle.fullcontentlink:pdfeventlink/002fj002fcdbme.2015.1.issue-1002fcdbme-2015-0089002fcdbme-2015-0089.pdf?result=3&rskey=AEqZpl&t:ac=j002fcdbme.2015.1.issue-1002fcdbme-2015-0089$002fcdbme-2015-0089.xml},
                        doi = {10.1515/cdbme-2015-0089}
                      }
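                     A toy sketch of the learning setup described above, using scikit-learn support vector regression on synthetic stand-in data (the six geometric features per valve and all numbers are made up; in the paper the features are extracted from ultrasound images):

                         import numpy as np
                         from sklearn.svm import SVR
                         from sklearn.multioutput import MultiOutputRegressor
                         from sklearn.pipeline import make_pipeline
                         from sklearn.preprocessing import StandardScaler

                         # X: geometric features of dilated valves, Y: features of the corresponding
                         # healthy valves; random placeholders instead of real measurements.
                         rng = np.random.default_rng(0)
                         X = rng.normal(size=(40, 6))                     # 40 valves, 6 features each
                         Y = 0.8 * X + rng.normal(scale=0.1, size=X.shape)

                         # One RBF-kernel SVR per output feature, with feature standardization.
                         model = MultiOutputRegressor(make_pipeline(StandardScaler(),
                                                                    SVR(kernel="rbf", C=10.0)))
                         model.fit(X, Y)

                         # Predict the healthy-valve features for a new dilated valve.
                         predicted_healthy_features = model.predict(X[:1])

                     The predicted healthy-valve features would then inform the choice of prosthesis size and shape, replacing an explicit biomechanical simulation.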
                      
                    • B. Hollmach, K. Schulz, S. Soltau, T. Saathoff, A. Schlaefer (2015), "Feasibility of Robotic Ultrasound Palpation", In 14. Jahrestagung der Deutschen Gesellschaft für Computer- und Roboterassistierte Chirurgie, September 17-19, 2015. Bremen, Germany, September, 2015. ,125-129. [Abstract] [BibTeX]
                    • Abstract: Ultrasound elastography presents an interesting method for measuring differences in tissue stiffness inside a patient. Conventionally, palpation plays an important role in the examination of lesions. However, elastography measurements are typically displayed as images, i.e., the haptic feedback is lost. We describe a system to realize robotic ultrasound elastography and haptic feedback of differences in tissue stiffness. We also report results for initial phantom experiments which indicate that robotic ultrasound palpation is feasible
                      BibTeX:
                      @inproceedings{Hollmach2015,
                        author = {B. Hollmach and K. Schulz and S. Soltau and T. Saathoff and A. Schlaefer},
                        title = {Feasibility of Robotic Ultrasound Palpation},
                        booktitle = {14. Jahrestagung der Deutschen Gesellschaft für Computer- und Roboterassistierte Chirurgie, September 17-19, 2015},
                        year = {2015},
                        pages = {125-129}
                      }
                      
                    • M. Neidhardt, O. Rajput, D. Drömann, A. Schlaefer (2015), "Mobile C-arm Deformation and its implication on Stereoscopic Localization", Tagungsband der 14. Jahrestagung der Deutschen Gesellschaft für Computer- und Roboterassistierte Chirurgie CURAC'15., In Tagungsband der 14. Jahrestagung der Deutschen Gesellschaft für Computer- und Roboterassistierte Chirurgie. Bremen, Germany, September, 2015. Vol. 1,183-187. [Abstract] [BibTeX]
                    • Abstract: Accurate localization in minimally invasive procedures is challenging, particularly in the presence of nonideal imaging systems. Mobile C-arms present a widely used tool for image guidance, including stereoscopic localization of tools, e.g., during bronchoscopy. However, the localization accuracy is susceptible to non-idealities like gravitational deformation of the C-arm gantry. We present a simulation study quantifying the effects of the deformation on two different approaches, namely, external tracking of the gantry pose and marker based pose estimation from within the X-ray images. A finite element model for a typical C-arm geometry is used to estimate deformations and their effect on the localization error is determined. Results show possible offsets between the C-arm source and detector position of up to 12 mm and a detector rotation of 1°. Furthermore, we demonstrate that localization based on the X-ray images is superior to external tracking of the gantry, with a target localization error of (0.67 ± 0.25) mm and (4.29 ± 0.69) mm, respectively. (A brief illustrative code sketch follows the BibTeX entry below.)
                      BibTeX:
                      @conference{Neidhardt2015,
                        author = {M. Neidhardt and O. Rajput and D. Drömann and A. Schlaefer},
                        title = {Mobile C-arm Deformation and its implication on Stereoscopic Localization},
                        booktitle = {Tagungsband der 14. Jahrestagung der Deutschen Gesellschaft für Computer- und Roboterassistierte Chirurgie},
                        journal = {Tagungsband der 14. Jahrestagung der Deutschen Gesellschaft für Computer- und Roboterassistierte Chirurgie CURAC'15},
                        year = {2015},
                        volume = {1},
                        pages = {183-187}
                      }
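
                      Illustration: a toy numeric example of why an unmodelled source/detector shift degrades stereoscopic localization, using simple pinhole cameras and linear (DLT) triangulation. The geometry, camera model and the 12 mm offset are made-up stand-ins, not the paper's finite-element simulation.

                        # Toy illustration of how a shifted detector biases stereoscopic localization.
                        # NOT the paper's FEM study; geometry and numbers are invented.
                        import numpy as np

                        def projection(cam_pos, focal=1000.0):
                            # Simple pinhole camera at cam_pos, no rotation, looking along +z (arbitrary units).
                            K = np.array([[focal, 0, 0], [0, focal, 0], [0, 0, 1.0]])
                            Rt = np.hstack([np.eye(3), -np.asarray(cam_pos).reshape(3, 1)])
                            return K @ Rt

                        def project(P, X):
                            x = P @ np.append(X, 1.0)
                            return x[:2] / x[2]

                        def triangulate(P1, x1, P2, x2):
                            # Linear (DLT) triangulation: two constraints per view, SVD null space.
                            A = np.vstack([x1[0] * P1[2] - P1[0], x1[1] * P1[2] - P1[1],
                                           x2[0] * P2[2] - P2[0], x2[1] * P2[2] - P2[1]])
                            X = np.linalg.svd(A)[2][-1]
                            return X[:3] / X[3]

                        target = np.array([5.0, 5.0, 500.0])
                        P1, P2 = projection([-100.0, 0, 0]), projection([100.0, 0, 0])
                        x1 = project(P1, target)

                        # View 2 is actually acquired with a laterally shifted source/detector (12 mm offset
                        # as an example), but triangulation still assumes the nominal geometry P2.
                        P2_shifted = projection([112.0, 0, 0])
                        x2_real = project(P2_shifted, target)
                        estimate = triangulate(P1, x1, P2, x2_real)
                        print(np.linalg.norm(estimate - target))  # localization error caused by the unmodeled shift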
                      
                    • C. Otte, A. Patel, A. Schlaefer, S. Otte, T. Loke, T. Ngo, D. Nir, M. Winkler (2015), "Confronting the challenge of "virtual" prostate biopsy", 8th International Symposium on Focal Therapy and Imaging in Prostate and Kidney Cancer. [Abstract] [BibTeX] [URL]
                    • Abstract: Introduction The current workflow of prostate biopsy is in need of improvement. Optical Coherence Tomography (OCT) has emerged as a promising technology capable of providing a virtual tissue analysis in real time. We explored the technological feasibility of OCT in combination with computerised interpretation of optical signals and application of Machine-Learning algorithms for in-vivo tissue diagnosis. In this "proof of concept" study we report the results of OCT imaging of fresh ex-vivo prostate tissue and signal processing, to identify cancer without the need for biopsy core processing. Methods OCT scans were obtained from 24 patients who underwent radical prostatectomy. Immediately after prostatectomy two postero-lateral tissue strips of approximately 15mm x 8mm x 6mm were prepared and coloured for orientation. Each strip was scanned twice from the capsular (outside) and the excision (inner) surface with an OCT microscope (EX1301, Vivosight Ltd.). Scan resolution was 4 x 4 x 50 microns. The EX1301 beam's penetration depth is 2mm. A Bidirectional Dynamic Cortex Memory Network was trained and tested on randomly chosen samples of OCT A-scan data. Mean classification rate and standard deviation were calculated for 10 cycles of training/testing. Routine histopathology analysis was used as the reference standard. Results Of 46 strips, 24 were found to contain prostate cancer and 22 benign tissue on histopathological evaluation. Applying mathematical feature extraction to OCT signals acquired from the excision (inner) surface of the strips, we could differentiate cancer from benign tissue. The mean classification rates achieved for the test and training sets were 67.65% (0.70%) and 69.20% (1.49%), respectively. Conclusion The application of machine-learning techniques to OCT data sets, which were obtained from ex-vivo prostate tissue, provides encouraging results and highlights the potential for a "virtual" biopsy approach. Further optimization and in-vivo application of this technique is in progress.
                      BibTeX:
                      @conference{Otte2015,
                        author = {C. Otte and A. Patel and A. Schlaefer and S. Otte and T. Loke and T. Ngo and D. Nir and M. Winkler},
                        title = {Confronting the challenge of "virtual" prostate biopsy},
                        journal = {8th International Symposium on Focal Therapy and Imaging in Prostate and Kidney Cancer},
                        year = {2015},
                        url = {http://www.erasmus.gr/microsites/1044/e-poster-catalogue}
                      }
                      
                    • K. Schulz, C. Otte, G. Hüttmann, A. Schlaefer (2015), "A Concept for Fail Safe Robotic Needle Insertion in Soft Tissue", Gemeinsamer Tagungsband der Workshops der Tagung Software Engineering. Dresden, Germany Vol. 1337,7-10. [Abstract] [BibTeX] [URL]
                    • Abstract: This paper presents a concept of automatic needle placement for brachytherapy. For the success of this minimally invasive treatment, a precise and safe placement of needles inside soft tissue is fundamental. The presented concept incorporates information about the needle as well as the tissue to find a suitable needle trajectory. A patient-specific tissue model is derived from different imaging modalities and updated during the insertion. Essential for this concept is that during the robotic placement of the needle, it is continuously verified whether proceeding is safe.
                      BibTeX:
                      @conference{Schulz2015,
                        author = {K. Schulz and C. Otte and G. Hüttmann and A. Schlaefer},
                        title = {A Concept for Fail Safe Robotic Needle Insertion in Soft Tissue},
                        journal = {Gemeinsamer Tagungsband der Workshops der Tagung Software Engineering},
                        year = {2015},
                        volume = {1337},
                        pages = {7-10},
                        url = {http://ceur-ws.org/Vol-1337/}
                      }
                      
                    • N. Stein, T. Saathoff, S.-T. Antoni, A. Schlaefer (2015), "Creating 3D gelatin phantoms for experimental evaluation in biomedicine", Current Directions in Biomedical Engineering., September, 2015. Vol. 1(1),331-334. [Abstract] [BibTeX] [DOI] [URL]
                    • Abstract: We describe and evaluate a setup to create gelatin phantoms by robotic 3D printing. Key aspects are the large workspace, reproducibility and resolution of the created phantoms. Given its soft tissue nature, the gelatin is kept fluid inside the system, and we present parameters for additive printing of homogeneous, solid objects. The results indicate that 3D printing of gelatin can be an alternative for quickly creating larger soft tissue phantoms without the need for casting a mold.
                      BibTeX:
                      @article{Stein2015,
                        author = {N. Stein and T. Saathoff and S.-T. Antoni and A. Schlaefer},
                        title = {Creating 3D gelatin phantoms for experimental evaluation in biomedicine},
                        journal = {Current Directions in Biomedical Engineering},
                        year = {2015},
                        volume = {1},
                        number = {1},
                        pages = {331-334},
                        note = {ISSN 2364-5504},
                        url = {https://doi.org/10.1515/cdbme-2015-0082},
                        doi = {10.1515/cdbme-2015-0082}
                      }
                      
                    • A. Tack, Y. Kobayashi, T. Gauer, A. Schlaefer, R. Werner (2015), "Groupwise Registration for Robust Motion Field Estimation in Artifact-Affected 4D CT Images", Workshop on Imaging and Computer Assistance in Radiation Therapy, MICCAI 2015., In Workshop on Imaging and Computer Assistance in Radiation Therapy, MICCAI 2015. [Abstract] [BibTeX]
                    • Abstract: Precise voxel trajectory estimation in 4D CT images is a prerequisite for reliable dose accumulation during 4D treatment planning. 4D CT image data is, however, often affected by motion artifacts and applying standard pairwise registration to such data sets bears the risk of aligning anatomical structures to artifacts – with physiologically unrealistic trajectories being the consequence. In this work, the potential of a novel non-linear hybrid intensity- and feature-based groupwise registration method for robust motion field estimation in artifact-affected 4D CT image data is investigated. The overall registration performance is evaluated on the DIR-lab datasets; its robustness when applied to artifact-affected data is analyzed using clinically acquired data sets with and without artifacts. The proposed registration approach achieves an accuracy comparable to the state-of-the-art (subvoxel accuracy), but smoother voxel trajectories compared to pairwise registration. Even more importantly, it maintained accuracy and trajectory smoothness in the presence of image artifacts – in contrast to standard pairwise registration, which yields higher landmark-based registration errors and a loss of trajectory smoothness when applied to artifact-affected data sets. (A brief illustrative code sketch follows the BibTeX entry below.)
                      BibTeX:
                      @article{Tack2015,
                        author = {A. Tack and Y. Kobayashi and T. Gauer and A. Schlaefer and R. Werner},
                        title = {Groupwise Registration for Robust Motion Field Estimation in Artifact-Affected 4D CT Images},
                        booktitle = {Workshop on Imaging and Computer Assistance in Radiation Therapy, MICCAI 2015},
                        journal = {Workshop on Imaging and Computer Assistance in Radiation Therapy, MICCAI 2015},
                        year = {2015}
                      }
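
                      Illustration: one plausible way to quantify the "trajectory smoothness" compared in the abstract above is the mean magnitude of the second temporal differences of a voxel trajectory (lower means smoother). The metric and the synthetic trajectories are assumptions for illustration only, not necessarily those used by the authors.

                        # Hypothetical smoothness measure for a voxel trajectory over the breathing cycle.
                        import numpy as np

                        def trajectory_smoothness(traj):
                            """traj: (T, 3) array of a voxel's positions over T respiratory phases."""
                            second_diff = np.diff(traj, n=2, axis=0)          # acceleration-like term
                            return float(np.mean(np.linalg.norm(second_diff, axis=1)))

                        t = np.linspace(0, 2 * np.pi, 10, endpoint=False)
                        smooth = np.stack([np.zeros_like(t), np.zeros_like(t), 5 * np.sin(t)], axis=1)
                        jittery = smooth + np.random.default_rng(1).normal(scale=0.5, size=smooth.shape)
                        print(trajectory_smoothness(smooth), trajectory_smoothness(jittery))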
                      
                    • R. Werner, D. Schetelig, D. Säring, S.-T. Antoni, A. Dabrowski, B.-P. Diercks, R. Fliegert, A. Guse, A. Schlaefer, I. Wolf (2015), "Analysis of initial subcellular Ca2+ signals in fluorescence microscopy data from the perspective of image and signal processing", In 49th annual conference of the German Society for Biomedical Engineering (BMT'15). Lübeck, Germany, September, 2015. [Abstract] [BibTeX]
                    • Abstract: Calcium (Ca2+) signalling is essential for T cell activation, the on-switch for the adaptive immune system. It is assumed to start by localized short-lived initial Ca2+ signals - which, yet, have not been characterized. Initial signal formation can be examined by fluorescence microscopy but requires imaging with temporal and spatial resolutions being as high as possible. This, in turn, poses challenges from the perspective of image and signal processing, which will be discussed on the basis of our current workflow [1]. Primary and Jurkat T cells were loaded with two dyes (Fluo-4, FuraRed), stimulated by anti-CD3 or anti-CD3/CD28-coated beads, and imaged by ratiometric fluorescence microscopy (acquisition velocity up to 48fps; nominal spatial resolution 368nm). Different strategies for deconvolution and bleaching correction and their influence on quantitative measures for local Ca2+ activity were evaluated; approaches for SNR estimation and noise filtering were compared; cell shape/orientation normalization for cell population based Ca2+ signal analysis was proposed and applied. Deconvolution and bleaching correction techniques significantly influence quantitative Ca2+ dynamics measures on a single cell level. Lucy-Richardson deconvolution combined with fit-based additive bleaching correction is assumed to be an acceptable trade-off between computation time and image quality. Optimal noise filtering is, however, an open issue. Applied techniques range from moving averaging to more sophisticated low-pass filtering. Cell shape/orientation normalization, population-based Ca2+ dynamics analysis and the introduction of specific Ca2+ activity/responsiveness measures mitigate to some degree against the influence of specific post-processing block implementations. Fluorescence microscopy imaging of subcellular Ca2+ signals with a spatial resolution close to Abbe's resolution limit and rapid image acquisition is possible, but efficient processing of resulting large data sets and especially handling the trade-off between SNR and temporal resolution remains challenging. [1] D Schetelig et al. Proc BVM 401-6 (2015); Funded by Forschungszentrum Medizintechnik Hamburg, DFG (GU360/15-1) and Landesforschungsfoerderung Hamburg. (A brief illustrative code sketch follows the BibTeX entry below.)
                      BibTeX:
                      @inproceedings{Werner2015,
                        author = {R. Werner and D. Schetelig and D. Säring and S.-T. Antoni and A. Dabrowski and B.-P. Diercks and R. Fliegert and A. Guse and A. Schlaefer and I. Wolf},
                        title = {Analysis of initial subcellular Ca2+ signals in fluorescence microscopy data from the perspective of image and signal processing},
                        booktitle = {49th annual conference of the German Society for Biomedical Engineering (BMT'15)},
                        year = {2015}
                      }
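
                      Illustration: a sketch of the "fit-based additive bleaching correction" step mentioned in the abstract above, assuming a mono-exponential bleaching model fitted to the trace and subtracted. The trace is synthetic and the model choice is an assumption, not the authors' exact pipeline.

                        # Additive, fit-based bleaching correction for a fluorescence trace:
                        # fit a mono-exponential decay and subtract it (model choice is assumed).
                        import numpy as np
                        from scipy.optimize import curve_fit

                        def bleach_model(t, a, tau, c):
                            return a * np.exp(-t / tau) + c

                        t = np.arange(600) / 48.0                         # 600 frames at ~48 fps
                        rng = np.random.default_rng(2)
                        trace = bleach_model(t, 120.0, 8.0, 40.0) + rng.normal(scale=2.0, size=t.size)
                        trace[200:210] += 30.0                            # a short, localized Ca2+ signal

                        params, _ = curve_fit(bleach_model, t, trace, p0=(100.0, 5.0, 30.0))
                        corrected = trace - bleach_model(t, *params) + params[2]  # remove decay, keep offset
                        print(corrected[200:210].mean() - corrected[:100].mean()) # signal stands out after correction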
                      

                      2014

                      • R. Dürichen, T. Wissel, F. Ernst, A. Schlaefer, A. Schweikard (2014), "Multivariate respiratory motion prediction", Phys Med Biol. Vol. 59(20),6043-6060. [BibTeX] [DOI]
                      • BibTeX:
                        @article{Duerichen2014,
                          author = {R. Dürichen and T. Wissel and F. Ernst and A. Schlaefer and A. Schweikard},
                          title = {Multivariate respiratory motion prediction},
                          journal = {Phys Med Biol},
                          year = {2014},
                          volume = {59},
                          number = {20},
                          pages = {6043-6060},
                          doi = {10.1088/0031-9155/59/20/6043}
                        }
                        
                      • N. Lessmann, D. Drömann, A. Schlaefer (2014), "Feasibility of respiratory motion-compensated stereoscopic X-ray tracking for bronchoscopy", Int J Comput Assist Radiol Surg. Vol. 9(2),199-209. [BibTeX] [DOI]
                      • BibTeX:
                        @article{Lessmann2014,
                          author = {N. Lessmann and D. Drömann and A. Schlaefer},
                          title = {Feasibility of respiratory motion-compensated stereoscopic X-ray tracking for bronchoscopy},
                          journal = {Int J Comput Assist Radiol Surg},
                          year = {2014},
                          volume = {9},
                          number = {2},
                          pages = {199-209},
                          doi = {10.1007/s11548-013-0920-9}
                        }
                        
                      • C. Otte, S. Otte, L. Wittig, G. Hüttmann, C. Kugler, D. Drömann, A. Zell, A. Schlaefer (2014), "Investigating Recurrent Neural Networks for OCT A-scan Based Tissue Analysis", Methods Inf Med.. Thesis at: C. Otte, TU Hamburg-Harburg, Schwarzenbergstr. 95 E, room 3.088, 21073 Hamburg, Germany.., Jul, 2014. Vol. 53(4),245-249. [Abstract] [BibTeX] [DOI]
                      • Abstract: Objectives: Optical Coherence Tomography (OCT) has been proposed as a high resolution image modality to guide transbronchial biopsies. In this study we address the question, whether individual A-scans obtained in needle direction can contribute to the identification of pulmonary nodules. Methods: OCT A-scans from freshly resected human lung tissue specimen were recorded through a customized needle with an embedded optical fiber. Bidirectional Long Short Term Memory networks (BLSTMs) were trained on randomly distributed training and test sets of the acquired A-scans. Patient specific training and different pre-processing steps were evaluated. Results: Classification rates from 67.5% up to 76% were achieved for different training scenarios. Sensitivity and specificity were highest for a patient specific training with 0.87 and 0.85. Low pass filtering decreased the accuracy from 73.2% on a reference distribution to 62.2% for higher cutoff frequencies and to 56% for lower cutoff frequencies. Conclusion: The results indicate that a grey value based classification is feasible and may provide additional information for diagnosis and navigation. Furthermore, the experiments show patient specific signal properties and indicate that the lower and upper parts of the frequency spectrum contribute to the classification. (A brief illustrative code sketch follows the BibTeX entry below.)
                        BibTeX:
                        @article{Otte2014,
                          author = {C. Otte and S. Otte and L. Wittig and G. Hüttmann and C. Kugler and D. Drömann and A. Zell and A. Schlaefer},
                          title = {Investigating Recurrent Neural Networks for OCT A-scan Based Tissue Analysis},
                          journal = {Methods Inf Med},
                          school = {C. Otte, TU Hamburg-Harburg, Schwarzenbergstr. 95 E, room 3.088, 21073 Hamburg, Germany.},
                          year = {2014},
                          volume = {53},
                          number = {4},
                          pages = {245-249},
                          doi = {10.3414/ME13-01-0135}
                        }
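
                        Illustration: a minimal PyTorch sketch of a bidirectional LSTM classifying individual A-scans into two tissue classes, as a generic stand-in for the BLSTM approach named in the abstract above. Layer sizes, sequence length and the random data are placeholders; the paper's architecture, preprocessing and training procedure are not reproduced.

                          # Minimal bidirectional LSTM classifying OCT A-scans (tumour vs. healthy).
                          # All sizes and data are placeholders, not the published setup.
                          import torch
                          import torch.nn as nn

                          class AScanBLSTM(nn.Module):
                              def __init__(self, hidden=32):
                                  super().__init__()
                                  self.lstm = nn.LSTM(input_size=1, hidden_size=hidden,
                                                      batch_first=True, bidirectional=True)
                                  self.head = nn.Linear(2 * hidden, 2)   # two classes

                              def forward(self, x):                      # x: (batch, depth_samples, 1)
                                  out, _ = self.lstm(x)
                                  return self.head(out[:, -1, :])        # classify from the last time step

                          model = AScanBLSTM()
                          ascans = torch.randn(8, 256, 1)                # 8 synthetic A-scans, 256 depth samples
                          labels = torch.randint(0, 2, (8,))
                          loss = nn.CrossEntropyLoss()(model(ascans), labels)
                          loss.backward()                                # one illustrative backward pass
                          print(float(loss))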
                        
                      • O. Shahin, A. Beširevic, M. Kleemann, A. Schlaefer (2014), "Ultrasound-based tumor movement compensation during navigated laparoscopic liver interventions", Surg Endosc. Vol. 28(5),1734-1741. [BibTeX] [DOI]
                      • BibTeX:
                        @article{Shahin2014,
                          author = {O. Shahin and A. Beširevic and M. Kleemann and A. Schlaefer},
                          title = {Ultrasound-based tumor movement compensation during navigated laparoscopic liver interventions},
                          journal = {Surg Endosc},
                          year = {2014},
                          volume = {28},
                          number = {5},
                          pages = {1734-1741},
                          doi = {10.1007/s00464-013-3374-9}
                        }
                        
                      • T. Viulet, O. Blanck, A. Schlaefer (2014), "SU-E-T-258: Parallel Optimization of Beam Configurations for CyberKnife Treatments", Medical physics. Vol. 41,283. [Abstract] [BibTeX] [DOI]
                      • Abstract: Purpose: The CyberKnife delivers a large number of beams originating at different non-planar positions and with different orientation. We study how much the quality of treatment plans depends on the beams considered during plan optimization. Particularly, we evaluate a new approach to search for optimal treatment plans in parallel by running optimization steps concurrently. Methods: So far, no deterministic, complete and efficient method to select the optimal beam configuration for robotic SRS/SBRT is known. Considering a large candidate beam set increases the likelihood to achieve a good plan, but the optimization problem becomes large and impractical to solve. We have implemented an approach that parallelizes the search by solving multiple linear programming problems concurrently while iteratively resampling zero weighted beams. Each optimization problem contains the same set of constraints but different variables representing candidate beams. The search is synchronized by sharing the resulting basis variables among the parallel optimizations. We demonstrate the utility of the approach based on an actual spinal case with the objective to improve the coverage. Results: The objective function decreases, reaching a value of 5000 after 49, 31, 25 and 15 iterations for 1, 2, 4, and 8 parallel processes. This corresponds to approximately 97% coverage in 77%, 59%, and 36% of the mean number of iterations with one process for 2, 4, and 8 parallel processes, respectively. Overall, coverage increases from approximately 91.5% to approximately 98.5%. Conclusion: While on our current computer with uniform memory access the reduced number of iterations does not translate into a similar speedup, the approach illustrates how to effectively parallelize the search for the optimal beam configuration. The experimental results also indicate that for complex geometries the beam selection is critical for further plan optimization. (A brief illustrative code sketch follows the BibTeX entry below.)
                        BibTeX:
                        @article{Viulet2014a,
                          author = {T. Viulet and O. Blanck and A. Schlaefer},
                          title = {SU-E-T-258: Parallel Optimization of Beam Configurations for CyberKnife Treatments},
                          journal = {Medical physics},
                          year = {2014},
                          volume = {41},
                          pages = {283},
                          note = {Medical physics [0094-2405] Viulet, T J.:2014 Bd.:41 iss:6 S.:283 -283},
                          doi = {10.1118/1.4888589}
                        }
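
                        Illustration: a toy, single-process version of the iterative idea in the abstract above: solve a small LP over a random subset of candidate beams, keep the beams that received weight, and resample the zero-weighted slots. The dose coefficients are random numbers and scipy's linprog stands in for the optimizer; the parallel, synchronized scheme and a real dose model are not reproduced.

                          # Toy candidate-beam resampling loop (not a dose engine, not the parallel scheme).
                          import numpy as np
                          from scipy.optimize import linprog

                          rng = np.random.default_rng(3)
                          n_candidates, subset_size, prescription, oar_limit = 200, 30, 1.0, 0.3
                          D_target = rng.uniform(0.0, 0.1, size=(50, n_candidates))   # dose per unit weight, target voxels
                          D_oar = rng.uniform(0.0, 0.05, size=(20, n_candidates))     # dose per unit weight, OAR voxels

                          active = list(rng.choice(n_candidates, subset_size, replace=False))
                          for _ in range(10):
                              Dt, Do = D_target[:, active], D_oar[:, active]
                              k, m = len(active), Dt.shape[0]
                              # minimize total underdose slack s, with Dt w + s >= prescription and Do w <= oar_limit
                              c = np.concatenate([np.zeros(k), np.ones(m)])
                              A_ub = np.block([[-Dt, -np.eye(m)], [Do, np.zeros((Do.shape[0], m))]])
                              b_ub = np.concatenate([-prescription * np.ones(m), oar_limit * np.ones(Do.shape[0])])
                              res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (k + m))
                              weights = res.x[:k]
                              kept = [b for b, w in zip(active, weights) if w > 1e-6]
                              new = rng.choice([b for b in range(n_candidates) if b not in kept],
                                               subset_size - len(kept), replace=False)
                              active = kept + list(new)
                              print(round(res.fun, 4), len(kept))   # total underdose and number of kept beams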
                        
                      • T. Viulet, O. Blanck, A. Schlaefer (2014), "SU-E-T-42: Analysis of Spatial Trade-Offs for a Spinal SRS Case", Medical physics. Vol. 41,231. [BibTeX] [DOI]
                      • BibTeX:
                        @article{Viulet2014,
                          author = {T. Viulet and O. Blanck and A. Schlaefer},
                          title = {SU-E-T-42: Analysis of Spatial Trade-Offs for a Spinal SRS Case},
                          journal = {Medical physics},
                          year = {2014},
                          volume = {41},
                          pages = {231},
                          note = {Medical physics [0094-2405] Viulet, T J.:2014 Bd.:41 iss:6 S.:231 -231},
                          doi = {10.1118/1.4888372}
                        }
                        
                      • B. Wang, A. Schlaefer, Z. Zhang (2014), "An optical approach to validate ultrasound surface segmentation of the heart", SPIE Proceedings. Vol. 9230 [Abstract] [BibTeX] [DOI]
                      • Abstract: The patient-specific geometry of the heart is of interest for a number of diagnostic methods, e.g., when modeling the inverse electrocardiography (ECG) problem. One approach to get images of the heart is three-dimensional ultrasound. However, segmentation of the surface is complicated and segmentation methods are typically validated against manually drawn contours. This requires considerable expert knowledge. Hence, we have developed a setup that allows studying the accuracy of image segmentation from cardiac ultrasound. Using an optical tracking system, we have measured the three-dimensional surface of an isolated porcine heart. We studied whether the actual geometry can be reconstructed from both optical and ultrasound images. We illustrate the use of our approach in quantifying the segmentation result for a three-dimensional region-based active contour algorithm. (A brief illustrative code sketch follows the BibTeX entry below.)
                        BibTeX:
                        @article{Wang2014,
                          author = {B. Wang and A. Schlaefer and Z. Zhang},
                          title = {An optical approach to validate ultrasound surface segmentation of the heart},
                          journal = {SPIE Proceedings},
                          year = {2014},
                          volume = {9230},
                          note = {Twelfth International Conference on Photonics and Imaging in Biology and Medicine},
                          doi = {10.1117/12.2069023}
                        }
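
                        Illustration: a sketch of the validation idea: once the segmented ultrasound surface and the optically measured reference surface are registered, segmentation accuracy can be summarized via nearest-neighbour point-to-surface distances. The point clouds here are synthetic and the distance summary is a generic choice, not necessarily the metric used in the paper.

                          # Nearest-neighbour distances between a segmented surface and a reference surface.
                          import numpy as np
                          from scipy.spatial import cKDTree

                          rng = np.random.default_rng(4)
                          phi, theta = rng.uniform(0, 2 * np.pi, 2000), rng.uniform(0, np.pi, 2000)
                          reference = 30.0 * np.stack([np.sin(theta) * np.cos(phi),
                                                       np.sin(theta) * np.sin(phi),
                                                       np.cos(theta)], axis=1)          # "optical" surface (sphere, r = 30 mm)
                          segmented = reference[:500] + rng.normal(scale=0.8, size=(500, 3))  # noisy "ultrasound" surface

                          dists, _ = cKDTree(reference).query(segmented)                # point-to-reference distances
                          print(f"mean {dists.mean():.2f} mm, 95th percentile {np.percentile(dists, 95):.2f} mm")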
                        
                      • L. Wittig, C. Otte, S. Otte, G. Hüttmann, D. Drömann, A. Schlaefer (2014), "Tissue analysis of solitary pulmonary nodules using OCT A-Scan imaging needle probe", The European respiratory journal. Vol. 44(58),4979. [BibTeX] [URL]
                      • BibTeX:
                        @article{Wittig2014,
                          author = {L. Wittig and C. Otte and S. Otte and G. Hüttmann and D. Drömann and A. Schlaefer},
                          title = {Tissue analysis of solitary pulmonary nodules using OCT A-Scan imaging needle probe},
                          journal = {The European respiratory journal},
                          year = {2014},
                          volume = {44},
                          number = {58},
                          pages = {4979},
                          note = {The European respiratory journal [0903-1936] Wittig, L J.:2014 Bd.:44 iss:Suppl 58 S.:P4979},
                          url = {http://erj.ersjournals.com/content/44/Suppl_58/P4979.short#}
                        }
                        

                        2013

                        • R. Dürichen, O. Blanck, J. Dunst, G. Hildebrandt, A. Schlaefer, A. Schweikard (2013), "Atemphasenabhängige Prädiktionsfehler in der extrakraniellen stereotaktischen Strahlentherapie", In Deutschen Gesellschaft für Radioonkologie (DEGRO). [Abstract] [BibTeX]
                        • Abstract: Purpose: Compensating for respiration-induced tumor motion during extracranial stereotactic radiotherapy is not trivial. With the robot-assisted CyberKnife, an increasingly used active motion compensation system, internal tumor motion is correlated with optical markers on the chest and predicted during treatment by means of prediction algorithms so that the robot can move synchronously with the tumor. Accuracy analyses of the system have been published repeatedly. We investigated to what extent the prediction accuracy depends on the respiratory phase, in order to better assess errors and better prepare patients for treatment. Methods: For the analysis we used patient data from 37 abdominal and 34 lung treatments (143 and 124 fractions, respectively) treated at our center. To examine the prediction error, we divided each patient's measured and predicted breathing motion, which is stored in the CyberKnife log files, into 10 respiratory phases. We then computed the mean and maximum prediction error per phase in the x, y and z directions and in 3D, and compared the respective respiratory phases with each other. Results: For example, the median of the mean 3D prediction error across all liver patients was 0.14 mm (phase 1), 0.09 mm (phase 3), 0.07 mm (phase 5) and 0.08 mm (phase 8). For lung patients the median was, for example, 0.05 mm (phase 1), 0.03 mm (phase 3), 0.03 mm (phase 5) and 0.03 mm (phase 8). On average, the prediction errors are larger for liver patients than for lung patients, which may be attributable to the larger motion in the liver. The maximum mean prediction error was 0.7 mm (phase 1) for a liver patient and 0.81 mm (phase 3) for a lung patient. The results show a strong dependence between respiratory phase and prediction error. The prediction error is larger at the transitions from expiration to inspiration (phase 1/10) than in the phases of pure inspiration (phase 3), expiration (phase 8) and the transition from inspiration to expiration (phase 5/6). This error distribution is the same for lung and liver patients. Conclusion: This analysis shows that, while the mean phase-dependent prediction error is very small for lung and liver patients, individual patients can nevertheless exhibit high mean prediction errors in some cases. This is mainly because predicting the breathing motion after a longer resting phase (exhalation) is difficult. Better patient coaching toward regular breathing without pauses could bring improvements here. As a next step, we plan to investigate the dosimetric effect for patients with a high prediction error in more detail. In addition, the potential of newer prediction algorithms is to be investigated. (A brief illustrative code sketch follows the BibTeX entry below.)
                          BibTeX:
                          @conference{Duerichen2013,
                            author = {R. Dürichen and O. Blanck and J. Dunst and G. Hildebrandt and A. Schlaefer and A. Schweikard},
                            title = {Atemphasenabhängige Prädiktionsfehler in der extrakraniellen stereotaktischen Strahlentherapie},
                            booktitle = {Deutschen Gesellschaft für Radioonkologie (DEGRO)},
                            year = {2013}
                          }
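
                          Illustration: a sketch of the phase-wise analysis described in the abstract above: split measured and predicted breathing traces into 10 respiratory phases and report the mean 3D prediction error per phase. The signals and the phase definition (uniform split of each breathing cycle) are simplifications, not the CyberKnife log-file processing used in the study.

                            # Phase-binned mean 3D prediction error on a synthetic breathing trace.
                            import numpy as np

                            rng = np.random.default_rng(5)
                            t = np.arange(0, 300, 0.1)                      # 5 minutes of samples
                            period = 4.0                                    # breathing period in seconds
                            measured = np.stack([2 * np.sin(2 * np.pi * t / period),
                                                 0.5 * np.sin(2 * np.pi * t / period),
                                                 8 * np.sin(2 * np.pi * t / period)], axis=1)   # mm, SI/LR/AP
                            predicted = measured + rng.normal(scale=0.05, size=measured.shape)  # small prediction error

                            phase = np.floor(10 * (t % period) / period).astype(int)            # phase bins 0..9
                            err3d = np.linalg.norm(predicted - measured, axis=1)
                            for p in range(10):
                                print(p, round(err3d[phase == p].mean(), 3))                    # mean 3D error per phase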
                          
                        • F. Ernst, R. Dürichen, A. Schlaefer, A. Schweikard (2013), "Evaluating and comparing algorithms for respiratory motion prediction", Phys Med Biol. Vol. 58(11),3911-3929. [BibTeX] [DOI]
                        • BibTeX:
                          @article{Ernst2013,
                            author = {F. Ernst and R. Dürichen and A. Schlaefer and A. Schweikard},
                            title = {Evaluating and comparing algorithms for respiratory motion prediction},
                            journal = {Phys Med Biol},
                            year = {2013},
                            volume = {58},
                            number = {11},
                            pages = {3911-3929},
                            doi = {10.1088/0031-9155/58/11/3911}
                          }
                          
                        • J. Hagenah, M. Scharfschwerdt, B. Stender, S. Ott, R. Friedl, H. Sievers, A. Schlaefer (2013), "A setup for ultrasound based assessment of the aortic root geometry", Biomedical Engineering/Biomedizinische Technik. Vol. 58(Suppl. 1) [BibTeX] [DOI]
                        • BibTeX:
                          @article{Hagenah2013,
                            author = {J. Hagenah and M. Scharfschwerdt and B. Stender and S. Ott and R. Friedl and H.H. Sievers and A. Schlaefer},
                            title = {A setup for ultrasound based assessment of the aortic root geometry},
                            journal = {Biomedical Engineering/Biomedizinische Technik},
                            year = {2013},
                            volume = {58},
                            number = {Suppl. 1},
                            doi = {10.1515/bmt-2013-4379}
                          }
                          
                        • F. Hartmann, A. Schlaefer (2013), "Feasibility of touch-less control of operating room lights", Int J Comput Assist Radiol Surg. Vol. 8(2),259-268. [Abstract] [BibTeX] [DOI]
                        • Abstract: PURPOSE: Today's highly technical operating rooms lead to fairly complex surgical workflows where the surgeon has to interact with a number of devices, including the operating room light. Hence, ideally, the surgeon could direct the light without major disruption of his work. We studied whether a gesture tracking-based control of an automated operating room light is feasible. METHODS: So far, there has been little research on control approaches for operating lights. We have implemented an exemplary setup to mimic an automated light controlled by a gesture tracking system. The setup includes an articulated arm to position the light source and an off-the-shelf RGBD camera to detect the user interaction. We assessed the tracking performance using a robot-mounted hand phantom and ran a number of tests with 18 volunteers to evaluate the potential of touch-less light control. RESULTS: All test persons were comfortable with using the gesture-based system and quickly learned how to move a light spot on flat surface. The hand tracking error is direction-dependent and in the range of several centimeters, with a standard deviation of less than 1 mm and up to 3.5 mm orthogonal and parallel to the finger orientation, respectively. However, the subjects had no problems following even more complex paths with a width of less than 10 cm. The average speed was 0.15 m/s, and even initially slow subjects improved over time. Gestures to initiate control can be performed in approximately 2 s. Two-thirds of the subjects considered gesture control to be simple, and a majority considered it to be rather efficient. CONCLUSIONS: Implementation of an automated operating room light and touch-less control using an RGBD camera for gesture tracking is feasible. The remaining tracking error does not affect smooth control, and the use of the system is intuitive even for inexperienced users.
                          BibTeX:
                          @article{Hartmann2013,
                            author = {F. Hartmann and A. Schlaefer},
                            title = {Feasibility of touch-less control of operating room lights},
                            journal = {Int J Comput Assist Radiol Surg},
                            year = {2013},
                            volume = {8},
                            number = {2},
                            pages = {259-268},
                            doi = {10.1007/s11548-012-0778-2}
                          }
                          
                        • C. Otte, S. Otte, L. Wittig, G. Hüttmann, D. Drömann, A. Schlaefer (2013), "Identifizierung von Tumorgewebe in der Lunge mittels optischer Kohärenztomographie", In 58. Jahrestagung der Deutschen Gesellschaft für Medizinische Informatik, Biometrie und Epidemiologie e.V. (GMDS)., Sep, 2013. [BibTeX] [DOI] [URL]
                        • BibTeX:
                          @conference{Otte2013a,
                            author = {C. Otte and S. Otte and L. Wittig and G. Hüttmann and D. Drömann and A. Schlaefer},
                            title = {Identifizierung von Tumorgewebe in der Lunge mittels optischer Kohärenztomographie},
                            booktitle = {58. Jahrestagung der Deutschen Gesellschaft für Medizinische Informatik, Biometrie und Epidemiologie e.V. (GMDS)},
                            year = {2013},
                            url = {http://www.egms.de/static/en/meetings/gmds2013/13gmds069.shtml},
                            doi = {10.3205/13gmds069}
                          }
                          
                        • S. Otte, C. Otte, A. Schlaefer, L. Wittig, G. Huttmann, D. Drömann, A. Zell (2013), "OCT A-Scan based lung tumor tissue classification with Bidirectional Long Short Term Memory networks", In Machine Learning for Signal Processing (MLSP), 2013 IEEE International Workshop on. ,1-6. [BibTeX] [DOI] [URL]
                        • BibTeX:
                          @inproceedings{Otte2013,
                            author = {S. Otte and C. Otte and A. Schlaefer and L. Wittig and G. Huttmann and D. Drömann and A. Zell},
                            title = {OCT A-Scan based lung tumor tissue classification with Bidirectional Long Short Term Memory networks},
                            booktitle = {Machine Learning for Signal Processing (MLSP), 2013 IEEE International Workshop on},
                            year = {2013},
                            pages = {1-6},
                            url = {http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=6661944},
                            doi = {10.1109/MLSP.2013.6661944}
                          }
                          
                        • L. Richter, P. Trillenberg, A. Schweikard, A. Schlaefer (2013), "Stimulus intensity for hand held and robotic transcranial magnetic stimulation", Brain Stimul. Vol. 6(3),315-321. [Abstract] [BibTeX] [DOI]
                        • Abstract: Background: Transcranial Magnetic Stimulation (TMS) is based on a changing magnetic field inducing an electric field in the brain. Conventionally, the TMS coil is mounted to a static holder and the subject is asked to avoid head motion. Additionally, head resting frames have been used. In contrast, our robotized TMS system employs active motion compensation (MC) to maintain the correct coil position. Objective/hypothesis: We study the effect of patient motion on TMS. In particular, we compare different coil positioning techniques with respect to the induced electric field. Methods: We recorded head motion for six subjects in three scenarios: (a) avoiding head motion, (b) using a head rest, and (c) moving the head freely. Subsequently, the motion traces were replayed using a second robot to move a sensor to measure the electric field in the target region. These head movements were combined with 2 types of coil positioning: (1) using a coil holder and (2) using robotized TMS with MC. Results: After 30 min the induced electric field was reduced by 32.0% and 19.7% for scenarios (1a) and (1b), respectively. For scenarios (2a)–(2c) it was reduced by only 4.9%, 1.4% and 2.0%, respectively, which is a significant improvement (P < 0.05). Furthermore, the orientation of the induced field changed by 5.5°, 7.6°, 0.4°, 0.2°, 0.2° for scenarios (1a)–(2c).
                          BibTeX:
                          @article{Richter2013,
                            author = {L. Richter and P. Trillenberg and A. Schweikard and A. Schlaefer},
                            title = {Stimulus intensity for hand held and robotic transcranial magnetic stimulation},
                            journal = {Brain Stimul},
                            year = {2013},
                            volume = {6},
                            number = {3},
                            pages = {315-321},
                            doi = {10.1016/j.brs.2012.06.002}
                          }
                          
                        • A. Schlaefer, T. Viulet, A. Muacevic, C. Fürweger (2013), "Multicriteria optimization of the spatial dose distribution", Med Phys.. Thesis at: Medical Robotics Group, Universität zu Lübeck, Lübeck 23562, Germany and Institute of Medical Technology, Hamburg University of Technology, Hamburg 21073, Germany.., Dec, 2013. Vol. 40(12),121720. [Abstract] [BibTeX] [DOI]
                        • Abstract: Treatment planning for radiation therapy involves trade-offs with respect to different clinical goals. Typically, the dose distribution is evaluated based on few statistics and dose-volume histograms. Particularly for stereotactic treatments, the spatial dose distribution represents further criteria, e.g., when considering the gradient between subregions of volumes of interest. The authors have studied how to consider the spatial dose distribution using a multicriteria optimization approach. The authors have extended a stepwise multicriteria optimization approach to include criteria with respect to the local dose distribution. Based on a three-dimensional visualization of the dose, the authors use a software tool allowing interaction with the dose distribution to map objectives with respect to its shape to a constrained optimization problem. Similarly, conflicting criteria are highlighted and the planner decides if and where to relax the shape of the dose distribution. To demonstrate the potential of spatial multicriteria optimization, the tool was applied to a prostate and meningioma case. For the prostate case, local sparing of the rectal wall and shaping of a boost volume are achieved through local relaxations while maintaining the remaining dose distribution. For the meningioma, target coverage is improved by compromising low dose conformality toward noncritical structures. A comparison of dose-volume histograms illustrates the importance of spatial information for achieving the trade-offs. The results show that it is possible to consider the location of conflicting criteria during treatment planning. Particularly, it is possible to conserve already achieved goals with respect to the dose distribution, to visualize potential trade-offs, and to relax constraints locally. Hence, the proposed approach facilitates a systematic exploration of the optimal shape of the dose distribution.
                          BibTeX:
                          @article{Schlaefer2013,
                            author = {A. Schlaefer and T. Viulet and A. Muacevic and C. Fürweger},
                            title = {Multicriteria optimization of the spatial dose distribution},
                            journal = {Med Phys},
                            school = {Medical Robotics Group, Universität zu Lübeck, Lübeck 23562, Germany and Institute of Medical Technology, Hamburg University of Technology, Hamburg 21073, Germany.},
                            year = {2013},
                            volume = {40},
                            number = {12},
                            pages = {121720},
                            doi = {10.1118/1.4828840}
                          }
                          
                        • O. Shahin, M. Kleemann, A. Schlaefer (2013), "Monitoring tumor location in navigated laparoscopic liver surgery", Computer Assisted Radiology and Surgery (CARS)., In Computer Assisted Radiology and Surgery (CARS). [BibTeX]
                        • BibTeX:
                          @conference{Shahin2013,
                            author = {O. Shahin and M. Kleemann and A. Schlaefer},
                            title = {Monitoring tumor location in navigated laparoscopic liver surgery},
                            booktitle = {Computer Assisted Radiology and Surgery (CARS)},
                            journal = {Computer Assisted Radiology and Surgery (CARS)},
                            year = {2013}
                          }
                          
                        • B. Stender, O. Blanck, B. Wang, A. Schlaefer (2013), "An active shape model for porcine whole heart segmentation from multi-slice computed tomography images", Computer Assisted Radiology and Surgery (CARS'2013). [BibTeX]
                        • BibTeX:
                          @conference{Stender2013,
                            author = {B. Stender and O. Blanck and B. Wang and A. Schlaefer},
                            title = {An active shape model for porcine whole heart segmentation from multi-slice computed tomography images},
                            journal = {Computer Assisted Radiology and Surgery (CARS'2013)},
                            year = {2013}
                          }
                          
                        • B. Stender, F. Ernst, B. Wang, Z. Zhang, A. Schlaefer (2013), "Motion compensation of optical mapping signals from isolated beating rat hearts", Applications of Digital Image Processing XXXVI., In SPIE Optical Engineering+ Applications. Vol. 8856,88561C-88561C. [Abstract] [BibTeX]
                        • Abstract: Optical mapping is a well established technique for recording monophasic action potential traces on the epicardial surface of isolated hearts. This measuring technique offers a high spatial resolution but it is sensitive towards myocardial motion. Motion artifacts occur because the mapping between a certain tissue portion sending out fluorescent light and a pixel of the photo detector changes over time. So far this problem has been addressed by suppressing the motion or ratiometric imaging. We developed a different approach to compensate the motion artifacts based on image registration. We could demonstrate how an image deformation field temporally changing with the heart motion could be determined. Using these deformation field time series for image transformation, motion signals could be generated for each image pixel, which were then successfully applied to remove baseline shift and compensate motion artifacts potentially leading to errors within maps of the first arrival time. The investigation was based on five different rat hearts stained with Di-4-ANEPPS. (A brief illustrative code sketch follows the BibTeX entry below.)
                          BibTeX:
                          @inproceedings{Stender2013a,
                            author = {B. Stender and F. Ernst and B. Wang and Z.X. Zhang and A. Schlaefer},
                            title = {Motion compensation of optical mapping signals from isolated beating rat hearts},
                            booktitle = {SPIE Optical Engineering+ Applications},
                            journal = {Applications of Digital Image Processing XXXVI},
                            year = {2013},
                            volume = {8856},
                            pages = {88561C-88561C}
                          }
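
                          Illustration: a sketch of the compensation step only: a frame is warped with a per-frame deformation field so that each pixel stays on the same tissue portion. The field here is a made-up constant translation standing in for a registration result; estimating the deformation field by image registration, as done in the paper, is not shown.

                            # Warp a frame back with a known (here, invented) deformation field.
                            import numpy as np
                            from scipy.ndimage import map_coordinates

                            rng = np.random.default_rng(6)
                            frame = rng.random((128, 128))                       # one fluorescence frame
                            dy = np.full(frame.shape, 1.5)                       # assumed deformation: 1.5 px down,
                            dx = np.full(frame.shape, -0.8)                      # 0.8 px left (would come from registration)

                            yy, xx = np.mgrid[0:frame.shape[0], 0:frame.shape[1]].astype(float)
                            moved = map_coordinates(frame, [yy - dy, xx - dx], order=1, mode="nearest")     # simulate motion
                            recovered = map_coordinates(moved, [yy + dy, xx + dx], order=1, mode="nearest")  # compensate

                            print(float(np.abs(recovered - frame)[8:-8, 8:-8].mean()))   # small residual away from borders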
                          
                        • B. Wang, B. Stender, T. Long, Z. Zhang, A. Schlaefer (2013), "An approach to validate ultrasound surface segmentation of the heart", Biomedical Engineering / Biomedizinische Technik. Vol. 58(Suppl. 1) [BibTeX] [DOI]
                        • BibTeX:
                          @article{Wang2013,
                            author = {B. Wang and B. Stender and T. Long and Z. Zhang and A. Schlaefer},
                            title = {An approach to validate ultrasound surface segmentation of the heart},
                            journal = {Biomedical Engineering / Biomedizinische Technik},
                            year = {2013},
                            volume = {58},
                            number = {Suppl. 1},
                            doi = {10.1515/bmt-2013-4283}
                          }
                          

                          2012

                          • O. Blanck, N. Andratschke, H.-W. Breyer, C. Stubert, A. Schlaefer, A. Schweikard, D. Rades, J. Dunst, G. Hildebrandt (2012), "Erweiterte Qualitätssicherung für die robotergestützte Radiochirurgie" [BibTeX]
                          • BibTeX:
                            @conference{Blanck2012,
                              author = {O. Blanck and N. Andratschke and H.-W. Breyer and C. Stubert and A. Schlaefer and A. Schweikard and D. Rades and J. Dunst and G. Hildebrandt},
                              title = {Erweiterte Qualitätssicherung für die robotergestützte Radiochirurgie},
                              year = {2012}
                            }
                            
                          • O. Blanck, G. Hildebrandt, J. Dunst, A. Schweikard, A. Schlaefer (2012), "Ein Review zur Bestrahlungsplanung für die robotergestützte Radiochirurgie – Konzepte, Klinikeinsatz und Ausblick auf zukünftige Entwicklungen", In DGMP Jahrestagung 2012 Jena. [BibTeX]
                          • BibTeX:
                            @inproceedings{Blanck2012a,
                              author = {O. Blanck and G. Hildebrandt and J. Dunst and A. Schweikard and A. Schlaefer},
                              title = {Ein Review zur Bestrahlungsplanung für die robotergestützte Radiochirurgie – Konzepte, Klinikeinsatz und Ausblick auf zukünftige Entwicklungen},
                              booktitle = {DGMP Jahrestagung 2012 Jena},
                              year = {2012}
                            }
                            
                          • O. Blanck, J. Krause, R. Dürichen, N. Andratschke, S. Wurster, A. Kovacs, G. Gaffke, K. Bogun, D. Rades, M. Birth, J. Dunst, G. Hildebrandt, A. Schweikard, A. Schlaefer (2012), "Pilotstudie zur Analyse der klinischen Genauigkeit der robotergestützten Radiochirurgie für Lebermetastasen", In DEGRO - Deutsche Gesellschaft für Radioonkologie e.V. Jahrestagung. [BibTeX]
                          • BibTeX:
                            @inproceedings{Blanck2012b,
                              author = {O. Blanck and J. Krause and R. Dürichen and N. Andratschke and S. Wurster and A. Kovacs and G. Gaffke and K.R. Bogun and D. Rades and M. Birth and J. Dunst and G. Hildebrandt and A. Schweikard and A. Schlaefer},
                              title = {Pilotstudie zur Analyse der klinischen Genauigkeit der robotergestützten Radiochirurgie für Lebermetastasen},
                              booktitle = {DEGRO - Deutsche Gesellschaft für Radioonkologie e.V. Jahrestagung},
                              year = {2012}
                            }
                            
                          • O. Blanck, J. Krause, R. Dürichen, S. Wurster, N. Andratschke, D. Rades, G. Hildebrandt, J. Dunst, A. Schweikard, A. Schlaefer (2012), "Retrospective Accuracy Estimation for Motion Compensated Robotic Radiosurgery of the Liver", Medical Physics. Vol. 39(3985) [Abstract] [BibTeX] [DOI]
                          • Abstract: Purpose: The CyberKnife™ compensates translational target motion by moving the beams synchronously. While the system was found to operate with sub-millimeter accuracy in phantoms, determining the clinical accuracy is challenging. Measuring the delivered dose distribution inside a patient is impractical. Hence an analysis of treatment data is typically used to estimate residual errors. Methods: We implant 3-5 fiducials for target tracking and treat liver tumors in 3-5 fractions with 45Gy at 80% to the PTV (CTV+3mm). Patients are aligned based on X-ray images in expiration breath hold. During delivery, X-ray images are acquired every 60-90s, and the translational and rotational misalignment is computed. We grouped this data into 10 respiratory phases. The mean misalignment for each phase was used to simulate the translation and rotation of the target with respect to the alignment center. The resulting dose distribution was computed and compared to the planned dose. Additionally, the quality of motion prediction was evaluated. Results: We analyzed 5 cases with a total of 17 fractions. The maximal target motion per fraction ranged from 9.2mm to 25.7mm (3D trajectory). The mean error for each patient ranged from -0.76/-0.01/-0.32mm to 0.35/0.17/0.10mm (Translation IS/LR/AP) and -0.94/-0.82/-2.07 degrees to 0.24/1.95/2.36 degrees (Rotation roll/pitch/yaw). The dose simulation showed point dose differences for each patient ranging from -0.10Gy to -0.76Gy (Mean) and -1.13Gy to -5.05Gy (Max). The resulting reduction in coverage ranged from 0.37% to 4.19% (PTV) and -0.43% to +0.94% (CTV). Finally, the mean prediction error over all fractions was 0.33mm. Conclusions: We demonstrated that while maximum point dose differences can be considerable, the coverage of the CTV is maintained even in the presence of substantial respiratory motion. The results indicate that the standard 3mm system uncertainty margin can account for errors due to rotation and deformation during robotic radiosurgery for tumors in the liver.
                            BibTeX:
                            @article{Blanck2012c,
                              author = {O. Blanck and J. Krause and R. Dürichen and S. Wurster and N. Andratschke and D. Rades and G. Hildebrandt and J. Dunst and A. Schweikard and A. Schlaefer},
                              title = {Retrospective Accuracy Estimation for Motion Compensated Robotic Radiosurgery of the Liver},
                              journal = {Medical Physics},
                              year = {2012},
                              volume = {39},
                              number = {3985},
                              doi = {10.1118/1.4736257}
                            }
                            
                          • F. Ernst, L. Richter, L. Matthäus, V. Martens, R. Bruder, A. Schlaefer, A. Schweikard (2012), "Non-orthogonal tool/flange and robot/world calibration", Int J Med Robot. Vol. 8(4),407-420. [Abstract] [BibTeX] [DOI]
                          • Abstract: BACKGROUND: For many robot-assisted medical applications, it is necessary to accurately compute the relation between the robot's coordinate system and the coordinate system of a localisation or tracking device. Today, this is typically carried out using hand-eye calibration methods like those proposed by Tsai/Lenz or Daniilidis. METHODS: We present a new method for simultaneous tool/flange and robot/world calibration by estimating a solution to the matrix equation AX = YB. It is computed using a least-squares approach. Because real robots and localisation devices are all afflicted by errors, our approach allows for non-orthogonal matrices, partially compensating for imperfect calibration of the robot or localisation device. We also introduce a new method where full robot/world and partial tool/flange calibration is possible by using localisation devices providing less than six degrees of freedom (DOFs). The methods are evaluated on simulation data and on real-world measurements from optical and magnetic tracking devices, volumetric ultrasound providing 3-DOF data, and a surface laser scanning device. We compare our methods with two classical approaches: the method by Tsai/Lenz and the method by Daniilidis. RESULTS: In all experiments, the new algorithms outperform the classical methods in terms of translational accuracy by up to 80% and perform similarly in terms of rotational accuracy. Additionally, the methods are shown to be stable: the number of calibration stations used has far less influence on calibration quality than for the classical methods. CONCLUSION: Our work shows that the new method can be used for estimating the relationship between the robot's and the localisation device's coordinate systems. The new method can also be used for deficient systems providing only 3-DOF data, and it can be employed in real-time scenarios because of its speed. (A brief illustrative code sketch follows the BibTeX entry below.)
                            BibTeX:
                            @article{Ernst2012,
                              author = {F. Ernst and L. Richter and L. Matthäus and V. Martens and R. Bruder and A. Schlaefer and A. Schweikard},
                              title = {Non-orthogonal tool/flange and robot/world calibration},
                              journal = {Int J Med Robot},
                              year = {2012},
                              volume = {8},
                              number = {4},
                              pages = {407-420},
                              doi = {10.1002/rcs.1427}
                            }
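
                          Illustration: a sketch of the AX = YB problem setup only: with synthetic pose pairs (A_i, B_i), X and Y are estimated jointly by a homogeneous linear least-squares solve that does not enforce orthogonality. This illustrates the formulation named in the abstract above, not the algorithm evaluated in the paper.

                            # Joint least-squares estimate of X and Y from A_i X = Y B_i (synthetic data).
                            import numpy as np

                            def random_pose(rng):
                                # Random rigid 4x4 transform: orthonormal rotation via QR plus random translation.
                                q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
                                q *= np.sign(np.linalg.det(q))
                                T = np.eye(4)
                                T[:3, :3], T[:3, 3] = q, rng.normal(size=3)
                                return T

                            rng = np.random.default_rng(7)
                            X_true, Y_true = random_pose(rng), random_pose(rng)
                            pairs = []
                            for _ in range(12):
                                B = random_pose(rng)                              # e.g. tracked marker pose
                                A = Y_true @ B @ np.linalg.inv(X_true)            # corresponding robot pose, A X = Y B
                                pairs.append((A, B))

                            # Column-major vec identity: vec(A X) = (I kron A) vec(X), vec(Y B) = (B^T kron I) vec(Y).
                            rows = [np.hstack([np.kron(np.eye(4), A), -np.kron(B.T, np.eye(4))]) for A, B in pairs]
                            z = np.linalg.svd(np.vstack(rows))[2][-1]             # null-space vector of the stacked system
                            X_est = z[:16].reshape(4, 4, order="F")
                            Y_est = z[16:].reshape(4, 4, order="F")
                            X_est, Y_est = X_est / X_est[3, 3], Y_est / Y_est[3, 3]   # fix the homogeneous scale
                            print(np.abs(X_est - X_true).max(), np.abs(Y_est - Y_true).max())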
                            
                          • M. Finke, S. Kantelhardt, A. Schlaefer, R. Bruder, E. Lankenau, A. Giese, A. Schweikard (2012), "Automatic scanning of large tissue areas in neurosurgery using optical coherence tomography", Int J Med Robot. Vol. 8(3),327-336. [Abstract] [BibTeX] [DOI]
                          • Abstract: BACKGROUND: With its high spatial and temporal resolution, optical coherence tomography (OCT) is an ideal modality for intra-operative imaging. One possible application is to detect tumour invaded tissue in neurosurgery, e.g. during complete resection of glioblastoma. Ideally, the whole resection cavity is scanned. However, OCT is limited to a small field of view (FOV) and scanning perpendicular to the tissue surface. METHODS: We present a new method to use OCT for scanning of the resection cavity during neurosurgical resection of brain tumours. The main challenges are creating a map of the cavity, scanning perpendicular to the surface and merging the three-dimensional (3D) data for intra-operative visualization and detection of residual tumour cells. RESULTS: Our results indicate that the proposed method enables creating high-resolution maps of the resection cavity. An overlay of these maps with the microscope images provides the surgeon with important information on the location of residual tumour tissue underneath the surface. CONCLUSION: We demonstrated that it is possible to automatically acquire an OCT image of the complete resection cavity. Overlaying microscopy images with depth information from OCT could lead to improved detection of residual tumour cells.
                            BibTeX:
                            @article{Finke2012,
                              author = {M. Finke and S. Kantelhardt and A. Schlaefer and R. Bruder and E. Lankenau and A. Giese and A. Schweikard},
                              title = {Automatic scanning of large tissue areas in neurosurgery using optical coherence tomography},
                              journal = {Int J Med Robot},
                              year = {2012},
                              volume = {8},
                              number = {3},
                              pages = {327-336},
                              doi = {10.1002/rcs.1425}
                            }
                            
                          • F. Gasca, T. Wissel, H. Hadjar, A. Schlaefer, A. Schweikard (2012), "Sparsely optimized multi-electrode transcranial direct current stimulation", In Bernstein Conference, Frontiers in Computational Neuroscience. Vol. 6(136) Frontiers Media. [BibTeX] [DOI]
                          • BibTeX:
                            @inproceedings{Gasca2012,
                              author = {F. Gasca and T. Wissel and H. Hadjar and A. Schlaefer and A. Schweikard},
                              title = {Sparsely optimized multi-electrode transcranial direct current stimulation},
                              booktitle = {Bernstein Conference, Frontiers in Computational Neuroscience},
                              publisher = {Frontiers Media},
                              year = {2012},
                              volume = {6},
                              number = {136},
                              doi = {10.3389/conf.fncom.2012.55.00136}
                            }
                            
• H. Hadjar, R. Friedl, H.-H. Sievers, P. Hunold, B. Stender, A. Schlaefer (2012), "Patient-specific finite-element simulation of aortic valve-sparing surgery", In Computer Assisted Radiology and Surgery (CARS). [BibTeX]
                          • BibTeX:
                            @inproceedings{Hadjar2012,
                              author = {H. Hadjar and R. Friedl and H.-H. Sievers and P. Hunold and B. Stender and A. Schlaefer},
                              title = {Patient-specific finite-element simulation of aortic valve-sparing surgery},
                              booktitle = {Computer Assisted Radiology and Surgery (CARS)},
                              year = {2012},
                              number = {Computer Assisted Radiology and Surgery (CARS)}
                            }
                            
                          • M. Heinig, U. G. Hofmann, A. Schlaefer (2012), "Calibration of the motor-assisted robotic stereotaxy system: MARS", Int J Comput Assist Radiol Surg. Vol. 7(6),911-920. [Abstract] [BibTeX] [DOI]
                          • Abstract: Background: The motor-assisted robotic stereotaxy system presents a compact and light-weight robotic system for stereotactic neurosurgery. Our system is designed to position probes in the human brain for various applications, for example, deep brain stimulation. It features five fully automated axes. High positioning accuracy is of utmost importance in robotic neurosurgery. Methods: First, the key parameters of the robot’s kinematics are determined using an optical tracking system. Next, the positioning errors at the center of the arc which is equivalent to the target position in stereotactic interventions are investigated using a set of perpendicular cameras. A modeless robot calibration method is introduced and evaluated. To conclude, the application accuracy of the robot is studied in a phantom trial. Results: We identified the bending of the arc under load as the robot’s main error source. A calibration algorithm was implemented to compensate for the deflection of the robot’s arc. The mean error after the calibration was 0.26 mm, the 68.27th percentile was 0.32 mm, and the 95.45th was 0.50 mm. Conclusion: The kinematic properties of the robot were measured, and based on the results an appropriate calibration method was derived. With mean errors smaller than currently used mechanical systems, our results show that the robot’s accuracy is appropriate for stereotactic interventions.
                            BibTeX:
                            @article{Heinig2012,
                              author = {M. Heinig and U. G. Hofmann and A. Schlaefer},
                              title = {Calibration of the motor-assisted robotic stereotaxy system: MARS},
                              journal = {Int J Comput Assist Radiol Surg},
                              year = {2012},
                              volume = {7},
                              number = {6},
                              pages = {911-920},
                              doi = {10.1007/s11548-012-0676-7}
                            }
                            
• L. Hertel, A. Schlaefer (2012), "Data Mining for Optimal Sail and Rudder Control of Small Robotic Sailboats", Robotic Sailing 2012., In Robotic Sailing, Proceedings of the 5th International Robotic Sailing Conference. ,37-48. Springer. [Abstract] [BibTeX] [DOI]
                          • Abstract: Finding the optimal parameter settings to control a sailing robot is an intricate task, as sailing presents a fairly complex problem with a highly non-linear interaction of boat, wind, and water. As no complete mathematical model for sailing is available, we studied how a large set of sensor data gathered in different conditions can be used to obtain parameters. In total, we analyzed approximately 2 million records collected during more than 110 hours of autonomous sailing on 55 different days. The data was preprocessed and episodes of stable sailing were extracted before studying boat, sail and rudder trim with respect to speed, course stability, and energy consumption. Our results highlight the multi-criteria nature of optimizing robotic sailboat control and indicate that a reduced set of preferable parameter settings may be used for effective control.
                            BibTeX:
                            @article{Hertel2012,
                              author = {L. Hertel and A. Schlaefer},
                              title = {Data Mining for Optimal Sail and Rudder Control of Small Robotic Sailboats},
                              booktitle = {Robotic Sailing, Proceedings of the 5th International Robotic Sailing Conference},
                              journal = {Robotic Sailing 2012},
                              publisher = {Springer},
                              year = {2012},
                              pages = {37-48},
                              doi = {10.1007/978-3-642-33084-1_4}
                            }
                            
                          • N. Lessmann, J. Sulikowski, P. Névoa, T. Kral, D. Drömann, A. Schlaefer (2012), "Ein Ansatz zur bewegungskompensierten stereoskopischen Navigation für die Bronchoskopie", In Jahrestagung der deutschen Gesellschaft für computer- und roboterassistierte Chirurgie (CURAC). [Abstract] [BibTeX]
• Abstract: Bronchoscopic diagnosis of peripheral lung tumours is complicated by respiratory motion and poor visibility in X-ray images. Accurate 3D localisation of the instrument and the target region can only be achieved with stereoscopic X-ray images. With sequential acquisition using a C-arm, respiration can cause displacements between the images. We describe an approach that uses a marker and a passive optical tracking system to determine the pose of the image planes while compensating for respiratory motion. First experimental results indicate that the system can acquire image data consistent with the respiratory state. From two X-ray images taken from different directions at the same respiratory state, the pose of the bronchoscope and the target region can be determined.
                            BibTeX:
                            @inproceedings{Lessmann2012,
                              author = {N. Lessmann and J. Sulikowski and P. Névoa and T. Kral and D. Drömann and A. Schlaefer},
                              title = {Ein Ansatz zur bewegungskompensierten stereoskopischen Navigation für die Bronchoskopie},
                              booktitle = {Jahrestagung der deutschen Gesellschaft für computer- und roboterassistierte Chirurgie (CURAC)},
                              year = {2012}
                            }
                            
• T. Neumann, A. Schlaefer (2012), "Feasibility of Basic Visual Navigation for Small Robotic Sailboats", Robotic Sailing. ,13-22. Springer. [Abstract] [BibTeX]
                          • Abstract: Image based navigation is a key research focus for many robotic applications. One complication for small sailing robots is their limited buoyancy and rather rapid motion. We studied whether it would still be feasible to use video data for basic navigation in an inshore race course scenario. Particularly, we considered methods for detecting the horizon and buoys, as well as estimating rotations via optical flow. All methods have been tested on a set of manually annotated scenes representing different sailing and lighting conditions. The results show that detection rates of more than 80% for the horizon and more than 94% for buoys can be achieved. Moreover, a comparison of the average optical flow with compass data indicates that rotations of the boat can be estimated. Hence, the methods should be considered in addition to other sensors.
                            BibTeX:
                            @inproceedings{Neumann2012,
                              author = {T. Neumann and A. Schlaefer},
                              title = {Feasibility of Basic Visual Navigation for Small Robotic Sailboats},
                              journal = {Robotic Sailing},
                              publisher = {Springer},
                              year = {2012},
                              pages = {13-22}
                            }
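
The rotation estimate mentioned in the Neumann and Schlaefer abstract above compares the average optical flow against compass data. A hedged sketch of that comparison using OpenCV's dense Farnebäck flow; the focal length and the small-angle approximation are assumptions for illustration, not details from the paper:

    import cv2
    import numpy as np

    def estimate_yaw_deg(prev_gray, curr_gray, focal_px):
        """Rough yaw change between two frames from the mean horizontal optical flow.

        prev_gray, curr_gray: consecutive single-channel 8-bit frames.
        focal_px: camera focal length in pixels (assumed known from calibration).
        Uses a small-angle approximation: yaw ~ mean horizontal shift / focal length.
        """
        flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mean_dx = float(np.mean(flow[..., 0]))   # horizontal flow component
        return float(np.degrees(mean_dx / focal_px))

Accumulating such per-frame estimates over time and comparing the result with the compass heading is one way to reproduce the kind of check described in the abstract.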
                            
                          • C. Otte, G. Hüttmann, G. Kovács, A. Schlaefer (2012), "Phantom validation of optical soft tissue navigation for brachytherapy", MICCAI Workshop on Image-Guidance and Multimodal Dose Planning in Radiation Therapy. ,96-100. [Abstract] [BibTeX]
• Abstract: In high dose rate brachytherapy, needles are inserted into soft tissue and subsequently radioactive sources are used to deliver a high dose inside the target region. While this approach can achieve a steep dose gradient and offers a focused, organ sparing treatment, it also requires a careful positioning of the needles with respect to the tissue. We have previously proposed to use an optical fiber embedded in the needle to detect soft tissue deformation. To validate the approach, we have developed an experimental setup to compare the actual needle motion with the motion estimated via the fiber. Our results show a good agreement between actual and estimated motion, indicating that optical deformation detection through the needle is possible.
                            BibTeX:
                            @article{Otte2012a,
                              author = {C. Otte and G. Hüttmann and G. Kovács and A. Schlaefer},
                              title = {Phantom validation of optical soft tissue navigation for brachytherapy},
                              journal = {MICCAI Workshop on Image-Guidance and Multimodal Dose Planning in Radiation Therapy},
                              year = {2012},
                              pages = {96-100}
                            }
                            
                          • C. Otte, G. Hüttmann, A. Schlaefer (2012), "Feasibility of optical detection of soft tissue deformation during needle insertion", SPIE Medical Imaging. Vol. 8316,8316-8316-11. [BibTeX] [DOI]
                          • BibTeX:
                            @article{Otte2012,
                              author = {C. Otte and G. Hüttmann and A. Schlaefer},
                              title = {Feasibility of optical detection of soft tissue deformation during needle insertion},
                              journal = {SPIE Medical Imaging},
                              year = {2012},
                              volume = {8316},
                              pages = {8316-8316-11},
                              doi = {10.1117/12.912538}
                            }
                            
                          • L. Richter, P. Trillenberg, A. Schweikard, A. Schlaefer (2012), "Comparison of stimulus intensity in hand held and robotized motion compensated transcranial magnetic stimulation", Neurophysiologie Clinique/Clinical Neurophysiology. Vol. 42,61-62. [Abstract] [BibTeX] [DOI] [URL]
                          • Abstract: Transcranial Magnetic Stimulation (TMS) is based on a changing magnetic field passing through the skull and inducing an electric field in the cortex [1,2]. The latter results in cortical stimulation and needs to be aligned with the target region. Conventionally, the TMS coil is mounted to a static holder and the subject is asked to avoid head motion. Additionally, head resting frames have been used [3]. In contrast, our robotized TMS system employs active motion compensation (MC) to maintain the correct coil position [4]. To assess the potential impact of patient motion, we study the induced electric field for the different setups. We recorded 30 min of head motion for six subjects in three scenarios: (a) using a coil holder and avoiding head motion, (b) using a coil holder and a head rest, and (c) using the robotized system with motion compensation. The motion traces were fed into a second robot to mimic head motion for a field sensor integrated in a head phantom. We found that after 30 minutes the induced electric field was reduced by 32.0% and 19.7% for scenarios (a) and (b), respectively. For scenario (c) it was reduced by only 4.9%. Furthermore, the orientation of the induced field changed by 5.5°, 7.6°, and 0.4° for scenarios (a), (b), and (c), respectively. None of the scenarios required rigid head fixation [5], which is often considered impractical and uncomfortable. We conclude that active motion compensation is a viable approach to maintain a stable stimulation during TMS treatments.
                            BibTeX:
                            @article{Richter2012,
                              author = {L. Richter and P. Trillenberg and A. Schweikard and A. Schlaefer},
                              title = {Comparison of stimulus intensity in hand held and robotized motion compensated transcranial magnetic stimulation},
                              journal = {Neurophysiologie Clinique/Clinical Neurophysiology},
                              year = {2012},
                              volume = {42},
                              pages = {61-62},
                              url = {http://www.sciencedirect.com/science/article/pii/S0987705311001869},
                              doi = {10.1016/j.neucli.2011.11.028}
                            }
                            
                          • O. Shahin, V. Martens, A. Beširevic, M. Kleemann, A. Schlaefer (2012), "Localization of liver tumors in freehand 3D laparoscopic ultrasound", Medical Imaging., In SPIE Medical Imaging., February, 2012. Vol. 8316,83162C. [Abstract] [BibTeX] [DOI]
• Abstract: The aim of minimally invasive laparoscopic liver interventions is to completely resect or ablate tumors while minimizing the trauma caused by the operation. However, restrictions such as limited field of view and reduced depth perception can hinder the surgeon's capabilities to precisely localize the tumor. Typically, preoperative data is acquired to find the tumor(s) and plan the surgery. Nevertheless, determining the precise position of the tumor is required, not only before but also during the operation. The standard use of ultrasound in hepatic surgery is to explore the liver and identify tumors. Meanwhile, the surgeon mentally builds a 3D context to localize tumors. This work aims to upgrade the use of ultrasound in laparoscopic liver surgery. We propose an approach to segment and localize tumors intra-operatively in 3D ultrasound. We reconstruct a 3D laparoscopic ultrasound volume containing a tumor. The 3D image is then preprocessed and semi-automatically segmented using a level set algorithm. During the surgery, for each subsequent reconstructed volume, a fast update of the tumor position is accomplished via registration, using the previously segmented and localized tumor as prior knowledge. The approach was tested on a liver phantom with artificial tumors. The tumors were localized in approximately two seconds with a mean error of less than 0.5 mm. The strengths of this technique are that it can be performed intra-operatively, it helps the surgeon to accurately determine the location, shape and volume of the tumor, and it is repeatable throughout the operation. [An illustrative sketch of a per-volume position update follows the BibTeX entry below.]
                            BibTeX:
                            @inproceedings{Shahin2012,
                              author = {O. Shahin and V. Martens and A. Beširevic and M. Kleemann and A. Schlaefer},
                              title = {Localization of liver tumors in freehand 3D laparoscopic ultrasound},
                              booktitle = {SPIE Medical Imaging},
                              journal = {Medical Imaging},
                              year = {2012},
                              volume = {8316},
                              pages = {83162C},
                              doi = {10.1117/12.912375}
                            }
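
The fast per-volume update described above registers each new volume against the previously segmented tumour. A simple way to sketch such an update is a translation-only registration via phase correlation; this only illustrates the idea and is not the registration used in the paper, and the scikit-image call is an assumption about available tooling:

    import numpy as np
    from skimage.registration import phase_cross_correlation

    def update_tumor_position(prev_volume, new_volume, prev_center):
        """Translation-only update of a tumour centre between two 3D ultrasound volumes.

        prev_center: (z, y, x) centre of the tumour segmented in prev_volume.
        Returns the estimated centre in new_volume; the level-set segmentation
        step from the abstract is not shown here.
        """
        shift, _, _ = phase_cross_correlation(prev_volume, new_volume)
        # phase_cross_correlation returns the shift that registers new_volume
        # onto prev_volume, so the anatomy moved by -shift in the new volume.
        return np.asarray(prev_center, dtype=float) - shift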
                            
                          • B. Stender, M. Brandenburger, B. Wang, Z. Zhang, A. Schlaefer (2012), "Motion compensation of optical mapping signals from rat heart slices", In Proc. SPIE 8553 Optics in Health Care and Biomedical Optics V. Vol. 8553 [BibTeX] [DOI] [URL]
                          • BibTeX:
                            @inproceedings{Stender2012,
                              author = {B. Stender and M. Brandenburger and B. Wang and Z.X. Zhang and A. Schlaefer},
                              title = {Motion compensation of optical mapping signals from rat heart slices},
                              booktitle = {Proc. SPIE 8553 Optics in Health Care and Biomedical Optics V},
                              year = {2012},
                              volume = {8553},
                              url = {http://proceedings.spiedigitallibrary.org/proceeding.aspx?articleid=1485450},
                              doi = {10.1117/12.2008992}
                            }
                            
                          • B. Stender, B. Wang, A. Schlaefer (2012), "Computing Synthetic Echocardiography Volumes for Automatic Validation of 3D Segmentation Results", Biomed Tech (Berl). Vol. 57(Suppl. 1) [BibTeX] [DOI]
                          • BibTeX:
                            @article{Stender2012a,
                              author = {B. Stender and B. Wang and A. Schlaefer},
                              title = {Computing Synthetic Echocardiography Volumes for Automatic Validation of 3D Segmentation Results},
                              journal = {Biomed Tech (Berl)},
                              year = {2012},
                              volume = {57},
                              number = {Suppl. 1},
  note = {[DOI:10.1515/bmt-2012-4492] [PubMed:http://www.ncbi.nlm.nih.gov/pubmed/22944790]},
                              doi = {10.1515/bmt-2012-4492}
                            }
                            
• T. Viulet, A. Schlaefer (2012), "SU-E-T-618: Error Compensated Sparse Optimization for Fast Radiosurgery Treatment Planning", Medical Physics., In Medical Physics. Vol. 39(3848-3848) Wiley-Blackwell. [Abstract] [BibTeX] [DOI]
• Abstract: Purpose: Radiosurgical treatment planning requires a good approximation of the dose distribution, which is typically computed on a high resolution grid. However, the resulting optimization problem is large and leads to substantial runtime. We study a sparse grid approach, for which we estimate and compensate for the expected deviations from the bounds. Methods: We build up an estimate of the hotspot error distribution by measuring the maximum dose deviation within a voxel for a large number of randomly generated beam configurations. This results in a conservative estimation of overdosage as a function of upper bound reduction for different grid sizes. We adjust the bounds for voxels inside the target volume (PTV) according to our estimation, thus maintaining the likelihood of dose deviations within acceptable limits. The approach was applied to a prostate case, where the volumes of interest are large and close to each other. Our planning objective is a prescribed dose of 36.25 Gy to the 87% isodose. We employed constrained optimization to optimize the lower PTV bound on 2, 4, and 8 mm isotropic grids. Results were computed on a 1 mm grid. Results: The initial coverage was 93.7%, 92%, and 91%, and the volume exceeding the upper bound was 0.74%, 1.71%, and 9% for grid sizes of 2, 4, and 8 mm, respectively. Changing the upper bound by 0.5% and 2.5% for the 4 and 8 mm grids resulted in only 0.75% and 2.2% of the volume exceeding the bound. The coverage did not change. Mean optimization times were 141.1, 22.6 and 3.4 minutes using the 2, 4 or 8 mm grid, respectively. Conclusions: Experiments show that planning on a sparse grid can achieve comparable results with those of a high resolution grid, as long as the bounds are carefully balanced. This leads to substantially lower optimization times, which facilitates interactive planning. This work was supported by the Graduate School for Computing in Medicine and Life Sciences funded by Germany's Excellence Initiative [DFG GSC 235/1]. [A compact form of the bound adjustment follows the BibTeX entry below.]
                            BibTeX:
                            @inproceedings{Viulet2012,
                              author = {T. Viulet and A. Schlaefer},
                              title = {SU-E-T-618: Error Compensated Sparse Optimization for Fast Radiosurgery Treatment Planning},
                              booktitle = {Medical Physics},
                              journal = {Medical Physics},
                              publisher = {Wiley-Blackwell},
                              year = {2012},
                              volume = {39},
                              number = {3848-3848},
                              doi = {10.1118/1.4735708}
                            }
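
The compensation described above simply tightens the per-voxel upper bounds on the coarse grid. In the notation below (chosen for illustration, not taken from the paper), D is the dose-influence matrix on a grid with spacing h, w >= 0 are the beam weights, and ε(h) is the conservative overdosage estimate obtained from the random beam configurations:

    \max_{w \ge 0,\; L} \; L
    \quad \text{s.t.} \quad
    L \le (Dw)_v \le \bar{d}_v - \varepsilon(h) \quad \forall v \in \mathrm{PTV},
    \qquad
    (Dw)_v \le \bar{c}_v \quad \forall v \in \mathrm{OAR}.

For the prostate case in the abstract, ε(h) amounts to roughly 0.5% and 2.5% of the upper bound on the 4 mm and 8 mm grids, respectively.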
                            

                            2011

• N. Ammann, F. Hartmann, P. Jauer, J. Krüger, T. Meyer, R. Bruder, A. Schlaefer (2011), "Global data storage for collision avoidance in robotic sailboat racing – the World Server Approach", 4th International Robotic Sailing Conference., In 4th International Robotic Sailing Conference. [BibTeX]
                            • BibTeX:
                              @inproceedings{Ammann2011,
                                author = {N. Ammann and F. Hartmann and P. Jauer and J. Krüger and T. Meyer and R. Bruder and A. Schlaefer},
  title = {Global data storage for collision avoidance in robotic sailboat racing – the World Server Approach},
                                booktitle = {4th International Robotic Sailing Conference},
                                journal = {4th International Robotic Sailing Conference},
                                year = {2011}
                              }
                              
                            • R. Bruder, F. Ernst, A. Schlaefer, A. Schweikard (2011), "A Framework for Real-Time Target Tracking in Radiosurgery using Three-dimensional Ultrasound", Computer Assisted Radiology and Surgery (CARS)., In Computer Assisted Radiology and Surgery (CARS). ,306-307. [BibTeX]
                            • BibTeX:
                              @conference{Bruder2011,
                                author = {R. Bruder and F. Ernst and A. Schlaefer and A. Schweikard},
                                title = {A Framework for Real-Time Target Tracking in Radiosurgery using Three-dimensional Ultrasound},
                                booktitle = {Computer Assisted Radiology and Surgery (CARS)},
                                journal = {Computer Assisted Radiology and Surgery (CARS)},
                                year = {2011},
                                pages = {306-307}
                              }
                              
                            • F. Ernst, R. Bruder, A. Schlaefer, A. Schweikard (2011), "Forecasting pulsatory motion for non-invasive cardiac radiosurgery: an analysis of algorithms from respiratory motion prediction", Int J Comput Assist Radiol Surg. Vol. 6(1),93-101. [Abstract] [BibTeX] [DOI]
• Abstract: Objective: Recently, radiosurgical treatment of cardiac arrhythmia, especially atrial fibrillation, has been proposed. Using the CyberKnife, focussed radiation will be used to create ablation lines on the beating heart to block unwanted electrical activity. Since this procedure requires high accuracy, the inevitable latency of the system (i.e., the robotic manipulator following the motion of the heart) has to be compensated for. Materials and methods: We examine the applicability of prediction algorithms developed for respiratory motion prediction to the prediction of pulsatory motion. We evaluated the MULIN, nLMS, wLMS, SVRpred and EKF algorithms. The test data used has been recorded using external infrared position sensors, 3D ultrasound and the NavX catheter systems. Results: With this data, we have shown that the error from latency can be reduced by at least 10% and as much as 75% (44% on average), depending on the type of signal. It has also been shown that, although the SVRpred algorithm was successful in most cases, it was outperformed by the simple nLMS algorithm, the EKF or the wLMS algorithm in a number of cases. Conclusion: We have shown that prediction of cardiac motion is possible and that the algorithms known from respiratory motion prediction are applicable. Since pulsation is more regular than respiration, more research will have to be done to improve frequency-tracking algorithms, like the EKF method, which performed better than expected from their behaviour on respiratory motion traces. [A minimal predictor sketch follows the BibTeX entry below.]
                              BibTeX:
                              @article{Ernst2011,
                                author = {F. Ernst and R. Bruder and A. Schlaefer and A. Schweikard},
                                title = {Forecasting pulsatory motion for non-invasive cardiac radiosurgery: an analysis of algorithms from respiratory motion prediction},
                                journal = {Int J Comput Assist Radiol Surg},
                                year = {2011},
                                volume = {6},
                                number = {1},
                                pages = {93-101},
                                doi = {10.1007/s11548-010-0424-9}
                              }
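
Of the algorithms listed above, the normalised LMS (nLMS) predictor is the simplest to write down. Below is a minimal numpy sketch of latency compensation with such a filter; the filter length, step size and regularisation constant are illustrative assumptions, not values from the paper:

    import numpy as np

    def nlms_predict(signal, horizon, M=16, mu=0.5, eps=1e-6):
        """Predict `signal` `horizon` samples ahead with a normalised LMS filter.

        At each step the filter predicts the current sample from the M samples
        lying `horizon` steps in the past, updates its weights on that error,
        and then extrapolates the newest M samples by the same horizon.
        """
        signal = np.asarray(signal, dtype=float)
        w = np.zeros(M)
        pred = np.zeros_like(signal)
        for k in range(M + horizon, len(signal)):
            u_train = signal[k - horizon - M:k - horizon]        # past window
            e = signal[k] - w @ u_train                          # prediction error
            w += mu * e * u_train / (eps + u_train @ u_train)    # normalised update
            if k + horizon < len(signal):
                pred[k + horizon] = w @ signal[k - M:k]          # latency-compensated output
        return pred

Comparing `pred` against the unpredicted, delayed signal quantifies how much of the latency error such a filter removes on a given motion trace.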
                              
                            • F. Ernst, R. Bruder, A. Schlaefer, A. Schweikard (2011), "Performance Measures and Pre-Processing for Respiratory Motion Prediction", 53rd Annual Meeting of the AAPM., In 53rd Annual Meeting of the AAPM. Vol. 38(6),3857. [Abstract] [BibTeX] [DOI]
• Abstract: Purpose: Much research has been done on prediction of respiratory motion traces for motion compensation in radiotherapy. Unfortunately, the results of different groups cannot be compared easily due to different standards in preprocessing and analysis of the results. Furthermore, it has been speculated that the typically used measure for prediction quality, the RMS error, is not sufficient alone. Methods: We propose a set of guidelines for signal preprocessing (i.e., for scaling, detrending, resampling, and denoising) as well as measures for the analysis of the prediction results. The latter complement the RMS error with confidence intervals, the signal's smoothness (called jitter) and a measure for the periodicity of the error (called frequency content). Additionally, we have developed an extendable cross-platform prediction toolkit for easy analysis of prediction algorithms. Results: We found that very different signals (corrupted by noise, scaled by a constant factor, delayed in time, and scaled by random factors for each respiratory period) feature the exact same RMS error when compared to the original signal. The fundamental difference in the error signals can only be determined when using spectral measures, like the frequency content. Conclusion: Using the guidelines developed, the proposed evaluation measures, as well as the publicly available prediction toolkit, should help the community in establishing a better understanding for the capabilities and shortcomings of individual prediction methods. Additionally, it should allow others to more readily compare newly developed methods to already published algorithms. In the future, it would be desirable to also create a database of motion traces from various sources. If these signals represented the characteristics of motion traces observed in the clinic, it could serve as a general benchmark for the quality of algorithms for motion prediction. [Simple implementations of two of the measures follow the BibTeX entry below.]
                              BibTeX:
                              @inproceedings{Ernst2011b,
                                author = {F. Ernst and R. Bruder and A. Schlaefer and A. Schweikard},
                                title = {Performance Measures and Pre-Processing for Respiratory Motion Prediction},
                                booktitle = {53rd Annual Meeting of the AAPM},
                                journal = {53rd Annual Meeting of the AAPM},
                                year = {2011},
                                volume = {38},
                                number = {6},
                                pages = {3857},
                                doi = {10.1118/1.3613523}
                              }
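
The RMS error and a smoothness measure are straightforward to compute; the jitter definition below (mean absolute first difference) is an assumption for illustration, since the abstract does not spell out the exact formula:

    import numpy as np

    def rms_error(true_signal, predicted):
        """Root-mean-square prediction error."""
        r = np.asarray(true_signal, dtype=float) - np.asarray(predicted, dtype=float)
        return float(np.sqrt(np.mean(r ** 2)))

    def jitter(predicted, dt=1.0):
        """Smoothness proxy: mean absolute first difference per time step."""
        d = np.diff(np.asarray(predicted, dtype=float)) / dt
        return float(np.mean(np.abs(d)))

A periodicity measure in the spirit of the "frequency content" would additionally look at the power spectrum of the error signal.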
                              
• F. Ernst, R. Bruder, A. Schlaefer, A. Schweikard (2011), "Validating an SVR-based correlation algorithm on human volumetric ultrasound data", Proceedings of the 25th International Congress and Exhibition on Computer Assisted Radiology and Surgery (CARS’11)., In Computer Assisted Radiology and Surgery (CARS). Vol. 6(S1),59-60. [BibTeX]
                            • BibTeX:
                              @inproceedings{Ernst2011c,
                                author = {F. Ernst and R. Bruder and A. Schlaefer and A. Schweikard},
                                title = {Validating an SVR-based correlation algorithm on human volumetric ultrasound data},
                                booktitle = {Computer Assisted Radiology and Surgery (CARS)},
                                journal = {Proceedings of the 25th International Congress and Exhibition on Computer Assisted Radiology and Surgery (CARS’11),},
                                year = {2011},
                                volume = {6},
                                number = {S1},
                                pages = {59-60},
                                edition = {Proceedings of the 25th International Congress and Exhibition on Computer Assisted Radiology and Surgery (CARS'11) Volume 6 of International Journal of Computer Assisted Radiology and Surgery , page accepted.}
                              }
                              
                            • F. Ernst, A. Schlaefer, A. Schweikard (2011), "Predicting the outcome of respiratory motion prediction", Med Phys. Vol. 38(10),5569-5581. [BibTeX] [DOI]
                            • BibTeX:
                              @article{Ernst2011a,
                                author = {F. Ernst and A. Schlaefer and A. Schweikard},
                                title = {Predicting the outcome of respiratory motion prediction},
                                journal = {Med Phys},
                                year = {2011},
                                volume = {38},
                                number = {10},
                                pages = {5569-5581},
                                doi = {10.1118/1.3633907}
                              }
                              
                            • F. Gasca, L. Marshall, S. Binder, A. Schlaefer, U. Hofmann, A. Schweikard (2011), "Finite element simulation of transcranial current stimulation in realistic rat head model", Proceedings of the 5th International IEEE EMBS Conference on Neural Engineering., In 5th International IEEE/EMBS Conference on Neural Engineering (NER). Cancun, Mexico ,36-39. [Abstract] [BibTeX] [DOI] [URL]
                            • Abstract: Transcranial current stimulation (tCS) is a method for modulating neural excitability and is used widely for studying brain function. Although tCS has been used on the rat, there is limited knowledge on the induced electric field distribution during stimulation. This work presents the finite element (FE) simulations of tCS in a realistic rat head model derived from MRI data. We simulated two electrode configurations and analyzed the spatial focality of the induced electric field for three implantation depth scenarios: (1) electrode implanted at the surface of the skull, (2) halfway through the skull and (3) in contact with cerebrospinal fluid. We quantitatively show the change in focality of stimulation with depth. This work emphasizes the importance of performing FE analysis in realistic models as a vital step in the design of tCS rat experiments. This can yield a better understanding of the location and intensity of stimulation, and its correlation to brain function.
                              BibTeX:
                              @inproceedings{Gasca2011,
                                author = {F. Gasca and L. Marshall and S. Binder and A. Schlaefer and U.G. Hofmann and A. Schweikard},
                                title = {Finite element simulation of transcranial current stimulation in realistic rat head model},
                                booktitle = {5th International IEEE/EMBS Conference on Neural Engineering (NER)},
                                journal = {Proceedings of the 5th International IEEE EMBS Conference on Neural Engineering},
                                year = {2011},
                                pages = {36-39},
                                url = {http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=5910483},
                                doi = {10.1109/NER.2011.5910483}
                              }
                              
                            • H. Hadjar, R. Friedl, H.-H. Sievers, P. Hunold, B. Stender, A. Schlaefer (2011), "Surgical planning tool for the simulation of the aortic valve-sparing surgery", Computer Assisted Radiology and Surgery (CARS)., In Computer Assisted Radiology and Surgery (CARS). Berlin [BibTeX]
                            • BibTeX:
                              @inproceedings{Hadjar2011,
                                author = {H. Hadjar and R. Friedl and H.-H. Sievers and P. Hunold and B. Stender and A. Schlaefer},
                                title = {Surgical planning tool for the simulation of the aortic valve-sparing surgery},
                                booktitle = {Computer Assisted Radiology and Surgery (CARS)},
                                journal = {Computer Assisted Radiology and Surgery (CARS)},
                                year = {2011}
                              }
                              
                            • M. Heinig, O. Christ, V. Tronnier, U. Hofmann, A. Schlaefer, A. Schweikard (2011), "Electromagnetic noise measurement of the Motor Assisted Robotic Stereotaxy System (MARS)", Proceedings of the 4th Hamlyn Symposium on Medical Robotics., In Proceedings of the 4th Hamlyn Symposium on Medical Robotics. (4),63-64. [BibTeX]
                            • BibTeX:
                              @inproceedings{Heinig2011,
                                author = {M. Heinig and O. Christ and V. Tronnier and U.G. Hofmann and A. Schlaefer and A. Schweikard},
                                title = {Electromagnetic noise measurement of the Motor Assisted Robotic Stereotaxy System (MARS)},
                                booktitle = {Proceedings of the 4th Hamlyn Symposium on Medical Robotics},
                                journal = {Proceedings of the 4th Hamlyn Symposium on Medical Robotics},
                                year = {2011},
                                number = {4},
                                pages = {63-64}
                              }
                              
                            • M. Heinig, M. Govela, F. Gasca, C. Dold, U. Hofmann, V. Tronnier, A. Schlaefer, A. Schweikard (2011), "MARS - Motor assisted robotic stereotaxy system", Proceedings of the 5th International IEEE EMBS Conference on Neural Engineering., In Proceedings of the 5th International IEEE EMBS Conference on Neural Engineering. Cancun, Mexico ,334-337. [Abstract] [BibTeX] [DOI] [URL]
• Abstract: We report on the design, setup and first results of a robotized system for stereotactic neurosurgery. It features three translational and two rotational axes, as well as a motorized MicroDrive, thereby resembling the Zamorano-Duchovny (ZD) design of stereotactic frames (inomed Medizintechnik GmbH). Both rotational axes intersect in one point, the Center of the Arc, facilitating trajectory planning. We used carbon fiber-reinforced plastic to reduce the weight of the system. The robot can be mounted to a standard operating table's side rails and can be transported on an operation theatre (OT) instrument table. We discuss the design paradigms, the resulting design and the actual robot. Kinematic calculations for the robot based on the Denavit-Hartenberg (DH) rules are presented. Positioning accuracy of our system is determined using two perpendicular cameras mounted on an industrial robot. The results are compared to a manual ZD system. We found that the robot's mean position deviation is 0.231 mm with a standard deviation of 0.076 mm. [The standard DH link transform is sketched after the BibTeX entry below.]
                              BibTeX:
                              @inproceedings{Heinig2011a,
                                author = {M. Heinig and M.F. Govela and F. Gasca and C. Dold and U.G. Hofmann and V. Tronnier and A. Schlaefer and A. Schweikard},
                                title = {MARS - Motor assisted robotic stereotaxy system},
                                booktitle = {Proceedings of the 5th International IEEE EMBS Conference on Neural Engineering},
                                journal = {Proceedings of the 5th International IEEE EMBS Conference on Neural Engineering},
                                year = {2011},
                                pages = {334-337},
                                url = {http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=5910555},
                                doi = {10.1109/NER.2011.5910555}
                              }
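
The kinematic calculations mentioned above follow the Denavit-Hartenberg convention. The classic DH link transform is textbook material and is reproduced here for reference; the DH parameters of the MARS axes themselves are not given in the abstract:

    import numpy as np

    def dh_transform(theta, d, a, alpha):
        """Classic Denavit-Hartenberg link transform.

        theta, alpha in radians; d, a in length units. The forward kinematics
        of a serial arm is the product of one such matrix per joint.
        """
        ct, st = np.cos(theta), np.sin(theta)
        ca, sa = np.cos(alpha), np.sin(alpha)
        return np.array([
            [ct, -st * ca,  st * sa, a * ct],
            [st,  ct * ca, -ct * sa, a * st],
            [0.0,      sa,       ca,      d],
            [0.0,     0.0,      0.0,    1.0],
        ])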
                              
                            • C. Otte, R. Ansari, G. Kovács, M. Sommerauer, G. Hüttmann, A. Schlaefer (2011), "Kompensation von Bewegungsartefakten beim Einbringen von Brachytherapienadeln", Informatik aktuell. ,444-448. [Abstract] [BibTeX]
• Abstract: Inserting needles into soft tissue causes motion and deformation. This particularly affects high-resolution imaging modalities with low penetration depth such as optical coherence tomography, which can, however, resolve even small structures and offers a possibility for "optical biopsy". Correcting motion artefacts on the basis of other image data, for example percutaneous or transrectal ultrasound, is complicated by the needles. We investigate whether the tissue deformations that occur can be accounted for with the help of a force-torque sensor. The results indicate a clear relationship between tissue deformation and the forces that occur.
                              BibTeX:
                              @inbook{Otte2011,
                                author = {C. Otte and R. Ansari and G. Kovács and M. Sommerauer and G. Hüttmann and A. Schlaefer},
                                title = {Kompensation von Bewegungsartefakten beim Einbringen von Brachytherapienadeln},
                                journal = {Informatik aktuell},
                                year = {2011},
                                pages = {444-448}
                              }
                              
                            • L. Richter, F. Ernst, A. Schlaefer, A. Schweikard (2011), "Robust real-time robot-world calibration for robotized transcranial magnetic stimulation", Int J Med Robot. Vol. 7(4),414-422. [Abstract] [BibTeX] [DOI]
• Abstract: Background: For robotized transcranial magnetic stimulation (TMS), the magnetic coil is placed on the patient's head by a robot. As the robotized TMS system requires tracking of head movements, robot and tracking camera need to be calibrated. However, for robotized TMS in a clinical setting, such calibration is required frequently. Mounting/unmounting a marker to the end effector and moving the robot into different poses is impractical. Moreover, if either system is moved during treatment, recalibration is required. Methods: To overcome this limitation, we propose to directly track a marker at link three of the articulated arm. Using forward kinematics and a constant marker transform to link three, the calibration can be performed instantly. Results: Our experimental results indicate an accuracy similar to standard hand-eye calibration approaches. It also outperforms classical hand-held navigated TMS systems. Conclusion: This robust online calibration greatly enhances the system's user-friendliness and safety. [The underlying transform chain is sketched after the BibTeX entry below.]
                              BibTeX:
                              @article{Richter2011,
                                author = {L. Richter and F. Ernst and A. Schlaefer and A. Schweikard},
                                title = {Robust real-time robot-world calibration for robotized transcranial magnetic stimulation},
                                journal = {Int J Med Robot},
                                year = {2011},
                                volume = {7},
                                number = {4},
                                pages = {414-422},
                                doi = {10.1002/rcs.411}
                              }
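
The online calibration described above reduces to composing three transforms: the forward-kinematic pose of link three in the robot base frame, the constant marker offset on link three, and the current marker pose reported by the camera. A hedged sketch of that composition (the frame names are mine, not the paper's):

    import numpy as np

    def camera_to_base(T_base_link3, T_link3_marker, T_cam_marker):
        """Instant robot/camera calibration from a marker tracked at link three.

        T_base_link3  : link-3 pose in the robot base frame (forward kinematics).
        T_link3_marker: constant, pre-determined marker offset on link three.
        T_cam_marker  : current marker pose reported by the tracking camera.
        Returns T_base_cam, the camera pose in the robot base frame.
        """
        return T_base_link3 @ T_link3_marker @ np.linalg.inv(T_cam_marker)

Because every factor is available at run time, the result can be refreshed whenever the camera or the robot is moved, which matches the motivation given in the abstract.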
                              
                            • L. Richter, F. Ernst, A. Schlaefer, A. Schweikard (2011), "Robust robot-camera calibration for robotized Transcranial Magnetic Stimulation", The International Journal of Medical Robotics and Computer Assisted Surgery. Vol. 7,414-422. [Abstract] [BibTeX] [DOI]
• Abstract: Background: For robotized transcranial magnetic stimulation (TMS), the magnetic coil is placed on the patient's head by a robot. As the robotized TMS system requires tracking of head movements, robot and tracking camera need to be calibrated. However, for robotized TMS in a clinical setting, such calibration is required frequently. Mounting/unmounting a marker to the end effector and moving the robot into different poses is impractical. Moreover, if either system is moved during treatment, recalibration is required. Methods: To overcome this limitation, we propose to directly track a marker at link three of the articulated arm. Using forward kinematics and a constant marker transform to link three, the calibration can be performed instantly. Results: Our experimental results indicate an accuracy similar to standard hand-eye calibration approaches. It also outperforms classical hand-held navigated TMS systems. Conclusion: This robust online calibration greatly enhances the system's user-friendliness and safety.
                              BibTeX:
                              @article{Richter2011a,
                                author = {L. Richter and F. Ernst and A. Schlaefer and A. Schweikard},
                                title = {Robust robot-camera calibration for robotized Transcranial Magnetic Stimulation},
                                journal = {The International Journal of Medical Robotics and Computer Assisted Surgery},
                                year = {2011},
                                volume = {7},
                                pages = {414-422},
                                doi = {10.1002/rcs.411}
                              }
                              
• T. Viulet, N. Rzezovski, A. Schlaefer (2011), "Three-Dimensional Isodose Surface Manipulation for Multi-Criteria Inverse Planning in Radiosurgery", Medical Physics., In Annual Meeting of the AAPM. Vol. 38(3371) [Abstract] [BibTeX] [DOI]
• Abstract: Purpose: Traditionally, the planning task for radiotherapy offers the human planner little direct spatial control of the dose distribution. Dose painting methods exist, but they typically suffer from side effects, such as uncontrolled spilling of dose. We developed a tool that allows for local three-dimensional isodose surface manipulation which avoids this problem by employing constrained optimization and inverse planning. Methods: In our approach, the planner operates directly on the three-dimensional dose distribution, e.g., selecting areas to be covered with a certain iso-dose surface. The underlying mathematical model implementing constrained inverse planning prevents dose from shifting into areas that are not explicitly selected. To move towards a desired clinical goal, e.g., coverage of the target, the planner directly controls where to operate trade-offs. First, a local objective is set graphically. Second, the potential trade-offs are visualized. Third, the planner selects where to relax constraints. Notably, the relaxation steps imply quick re-optimization in an interactive manner. We studied the use of the tool for a clinical case, where initially tight bounds on critical structures prevent sufficient target coverage. The dose bounds are then relaxed in specific areas, i.e., shaping the isodose surfaces in a controlled way. Results: Our experiments show that local isodose manipulation is possible, with little to no dose shifting. Outside the local target area, constraints stay in place and maintain the dose distribution. When the isodose surface is deliberately remodeled to relax constraints in some areas, the coverage of the target area is improved. Initial optimization times are below 40 seconds while re-optimization is done in less than 5 seconds. Conclusions: The planning tool implements a novel approach to interactively shape the dose distribution, which would be of particular interest in radiosurgical planning with steep gradients. Our results illustrate that interactive multi-criteria planning in the dose space is feasible. [A toy constrained-planning example follows the BibTeX entry below.]
                              BibTeX:
@article{RzezovskiandA.Schlaefer2011,
                                author = {T. Viulet and N. Rzezovski and A. Schlaefer},
                                title = {Three-Dimensional Isodose Surface Manipulation for Multi-Criteria Inverse Planning in Radiosurgery},
                                booktitle = {Annual Meeting of the AAPM},
                                journal = {Medical Physics},
                                year = {2011},
                                volume = {38},
                                number = {3371},
  edition = {Joint AAPM/COMP Meeting},
                                doi = {10.1118/1.3611475}
                              }
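
The selective relaxation described above can be illustrated with a toy linear program: beam weights are optimised to raise the minimum target dose while per-voxel upper bounds on a critical structure stay fixed except in a user-selected region. All matrices and numbers below are synthetic placeholders, not the authors' planning system:

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    n_beams = 20
    D_ptv = rng.uniform(0.0, 1.0, (30, n_beams))   # dose influence on target voxels
    D_oar = rng.uniform(0.0, 0.3, (15, n_beams))   # dose influence on a critical structure

    oar_bound = np.full(15, 10.0)                  # per-voxel upper bound (arbitrary units)
    oar_bound[:5] += 0.5                           # relax only a user-selected region

    # variables: x = [w_1 .. w_n, t]; maximise t, the minimum target dose
    c = np.zeros(n_beams + 1)
    c[-1] = -1.0
    A_ub = np.block([
        [-D_ptv, np.ones((30, 1))],                # t - D_ptv w <= 0 (t below every target dose)
        [D_oar, np.zeros((15, 1))],                # D_oar w <= bound (critical structure)
    ])
    b_ub = np.concatenate([np.zeros(30), oar_bound])
    bounds = [(0, None)] * n_beams + [(None, None)]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    print("minimum target dose achieved:", res.x[-1])

Re-running the program after changing which entries of oar_bound are relaxed mimics the interactive relax-and-re-optimize loop described in the abstract.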
                              
• A. Schlaefer (2011), "Treatment Planning for Motion Adaptation in Radiation Therapy", Adaptive Motion Compensation in Radiotherapy., In Adaptive Motion Compensation in Radiotherapy. Ch8, 65–75. CRC Press. [BibTeX]
                            • BibTeX:
                              @article{Schlaefer2011,
                                author = {A. Schlaefer},
                                title = {Treatment Planning for Motion Adaptation in Radiation Therapy},
                                booktitle = {Adaptive Motion Compensation in Radiotherapy},
                                journal = {Adaptive Motion Compensation in Radiotherapy},
                                publisher = {CRC Press},
                                year = {2011},
                                number = {CRC Press},
                                pages = {Ch8, 65–75}
                              }
                              
• A. Schlaefer, O. Blaurock (Eds.) (2011), "Robotic Sailing", Proceedings of the 4th International Robotic Sailing Conference., In Proceedings of the 4th International Robotic Sailing Conference. Springer. [BibTeX] [DOI]
                            • BibTeX:
                              @inproceedings{Schlaefer2011d,
                                author = {A. Schlaefer and O. Blaurock (Eds.)},
                                title = {Robotic Sailing},
                                booktitle = {Proceedings of the 4th International Robotic Sailing Conference},
                                journal = {Proceedings of the 4th International Robotic Sailing Conference},
                                publisher = {Springer},
                                year = {2011},
                                doi = {10.1007/978-3-642-22836-0}
                              }
                              
                            • A. Schlaefer, D. Beckmann, M. Heinig, R. Bruder (2011), "A New Class for Robotic Sailing: The Robotic Racing Micro Magic", Robotic Sailing: Proceedings of the 4th International Robotic Sailing Conference., In Robotic Sailing: Proceedings of the 4th International Robotic Sailing Conference. ,71-84. Springer Berlin Heidelberg. [Abstract] [BibTeX] [DOI] [URL]
• Abstract: A number of boat designs have been proposed for robotic sailing, particularly inspired by competitions like the Microtransat challenge, SailBot, and the World Robotic Sailing Championship. So far, most of the boats are one-offs, often highlighting naval architecture aspects. We propose a new one-design class based on a readily available kit. Small, lightweight and with proven sailing performance, the robotic racing Micro Magic presents a more standardized alternative, particularly for algorithm development and multi-boat scenarios. Our intention is to introduce an evolving class, and we propose a set of basic rules and describe the modified boat design, electronics, sensors and control approach for our prototype. Moreover, we have used four identical boats for the past year and we present results illustrating the good and comparable sailing performance, indicating that the class is suitable to study robotic sailing methods.
                              BibTeX:
                              @inbook{Schlaefer2011a,
                                author = {A. Schlaefer and D. Beckmann and M. Heinig and R. Bruder},
                                editor = {A. Schlaefer and O. Blaurock},
                                title = {A New Class for Robotic Sailing: The Robotic Racing Micro Magic},
                                booktitle = {Robotic Sailing: Proceedings of the 4th International Robotic Sailing Conference},
                                journal = {Robotic Sailing: Proceedings of the 4th International Robotic Sailing Conference},
                                publisher = {Springer Berlin Heidelberg},
                                year = {2011},
                                pages = {71-84},
                                url = {https://doi.org/10.1007/978-3-642-22836-0_5},
                                doi = {10.1007/978-3-642-22836-0_5}
                              }
                              
• A. Schlaefer, S. Dieterich (2011), "Feasibility of case-based beam generation for robotic radiosurgery", Artif Intell Med. Vol. 52(2),67-75. [Abstract] [BibTeX] [DOI]
• Abstract: OBJECTIVE: Robotic radiosurgery uses the kinematic flexibility of a robotic arm to target tumors and lesions from many different directions. This approach allows focusing the dose to the target region while sparing healthy surrounding tissue. However, the flexibility in the placement of treatment beams is also a challenge during treatment planning. We study an approach to make the search for treatment beams more efficient by considering previous treatment plans. METHODS AND MATERIAL: Conventionally, a beam generation heuristic based on randomly selected candidate beams has been proven to be most robust in clinical practice. However, for prevalent types of cancer similarities in patient anatomy and dose prescription exist. We present a case-based approach that introduces a problem specific measure of similarity and allows generating candidate beams from a database of previous treatment plans. Similarity between treatments is established based on projections of the organs and structures considered during planning, and the desired dose distribution. Solving the inverse planning problem, a subset of treatment beams is determined and adapted to the new clinical case. RESULTS: Preliminary experimental results indicate that the new approach leads to comparable plan quality for substantially fewer candidate beams. For two prostate cases, the dose homogeneity in the target region and the sparing of critical structures is similar for plans based on 400 and 600 candidate beams generated with the novel and the conventional method, respectively. However, the runtime for solving the inverse planning problem could be reduced by up to 47%, i.e., from approximately 19 min to less than 11 min. CONCLUSION: We have shown the feasibility of case-based beam generation for robotic radiosurgery. For prevalent clinical cases with similar anatomy the case-based approach could substantially reduce planning time while maintaining high plan quality.
                              BibTeX:
                              @article{Schlaefer2011c,
                                author = {A. Schlaefer and S. Dieterich},
                                title = {Feasibility of case-based beam generation for robotic radiosurgery},
                                journal = {Artif Intell Med},
                                year = {2011},
                                volume = {52},
                                number = {2},
                                pages = {67-75},
                                doi = {10.1016/j.artmed.2011.04.008}
                              }
                              
                            • A. Schlaefer, C. Otte, F. Noack, M. Sommerauer, G. Hüttmann, G. Kovacs (2011), "Preliminary study on optical coherence tomography for brachytherapy guidance", In 4th International Symposium on Focal Therapy and Imaging in Prostate & Kidney Cancer. [Abstract] [BibTeX] [URL]
• Abstract: Focal therapy requires precise localization of the target, and a navigated treatment. Transrectal ultrasound (TRUS) remains the preferable image modality for the prostate, although needles inserted during brachytherapy can cause substantial artifacts. Yet, the needles penetrate prostate tissue and could be used for high resolution imaging from within the target region (Fig. 1a). We study how optical coherence tomography (OCT) images can be obtained and mapped against histology data.
                              BibTeX:
                              @inproceedings{Schlaefer2011b,
                                author = {A. Schlaefer and C. Otte and F. Noack and M. Sommerauer and G. Hüttmann and G. Kovacs},
                                title = {Preliminary study on optical coherence tomography for brachytherapy guidance},
                                booktitle = {4th International Symposium on Focal Therapy and Imaging in Prostate & Kidney Cancer},
                                year = {2011},
                                url = {http://www.epostersonline.com/focther2011/?q=node/1425}
                              }
                              
                            • O. Shahin, V. Martens, A. Beširevic, M. Kleemann, A. Schlaefer (2011), "Intraoperative tumor localization in laparoscopic liver surgery", Proceedings of the Joint Workshop on New Technologies for Computer/Robot Assisted Surgery., In Proceedings of the Joint Workshop on New Technologies for Computer/Robot Assisted Surgery. [BibTeX]
                            • BibTeX:
                              @inproceedings{Shahin2011,
                                author = {O. Shahin and V. Martens and A. Beširevic and M. Kleemann and A. Schlaefer},
                                title = {Intraoperative tumor localization in laparoscopic liver surgery},
                                booktitle = {Proceedings of the Joint Workshop on New Technologies for Computer/Robot Assisted Surgery},
                                journal = {Proceedings of the Joint Workshop on New Technologies for Computer/Robot Assisted Surgery},
                                year = {2011}
                              }
                              
                            • B. Stender, M. Scharfschwerdt, F. Ernst, R. Bruder, H. Hadjar, A. Schlaefer (2011), "Optical Imaging of Cardiac Function: System setup and calibration", Proceedings of the 25th International Congress and Exhibition on Computer Assisted Radiology and Surgery (CARS 11)., In Proceedings of the 25th International Congress and Exhibition on Computer Assisted Radiology and Surgery (CARS 11). Vol. 6(1),42-43. [BibTeX]
                            • BibTeX:
                              @inproceedings{Stender2011,
                                author = {B. Stender and M. Scharfschwerdt and F. Ernst and R. Bruder and H. Hadjar and A. Schlaefer},
                                title = {Optical Imaging of Cardiac Function: System setup and calibration},
                                booktitle = {Proceedings of the 25th International Congress and Exhibition on Computer Assisted Radiology and Surgery (CARS 11)},
                                journal = {Proceedings of the 25th International Congress and Exhibition on Computer Assisted Radiology and Surgery (CARS 11)},
                                year = {2011},
                                volume = {6},
                                number = {1},
                                pages = {42-43}
                              }
                              
                            • T. Viulet, N. Rzezovski, A. Schlaefer (2011), "Towards interactive planning for radiotherapy by three-dimensional iso-dose manipulation", Proceedings of the 25th International Conference and Exhibition on Computer Assisted Radiology and Surgery (CARS'11) on AAPM/COMP Meeting., In Proceedings of the 25th International Conference and Exhibition on Computer Assisted Radiology and Surgery (CARS'11) on AAPM/COMP Meeting. [BibTeX]
                            • BibTeX:
                              @conference{Viulet2011,
                                author = {T. Viulet and N. Rzezovski and A. Schlaefer},
                                title = {Towards interactive planning for radiotherapy by three-dimensional iso-dose manipulation},
                                booktitle = {Proceedings of the 25th International Conference and Exhibition on Computer Assisted Radiology and Surgery (CARS'11) on AAPM/COMP Meeting},
                                journal = {Proceedings of the 25th International Conference and Exhibition on Computer Assisted Radiology and Surgery (CARS'11)oin AAPM/COMP Meeting},
                                year = {2011}
                              }
                              

                              2010

                              • N. Ammann, R. Biemann, F. Hartmann, C. Hauft, I. Heinecke, P. Jauer, J. Krüger, T. Meyer, R. Bruder, A. Schlaefer (2010), "Towards autonomous one-design sailboat racing: navigation, communication and collision avoidance" [BibTeX]
                              • BibTeX:
                                @conference{Ammann2010a,
                                  author = {N. Ammann and R. Biemann and F. Hartmann and C. Hauft and I. Heinecke and P. Jauer and J. Krüger and T.Meyer and R. Bruder and A. Schlaefer},
                                  title = {Towards autonomous one-design sailboat racing: navigation, communication and collision avoidance},
                                  year = {2010}
                                }
                                
                              • N. Ammann, F. Hartmann, P. Jauer, R. Bruder, A. Schlaefer (2010), "Design of a robotic sailboat for WRSC/SailBot", International Robotic Sailing Conference. ,40-42. [BibTeX]
                              • BibTeX:
                                @conference{Ammann2010,
                                  author = {N. Ammann and F. Hartmann and P. Jauer and R. Bruder and A. Schlaefer},
                                  title = {Design of a robotic sailboat for WRSC/SailBot},
                                  journal = {International Robotic Sailing Conference},
                                  year = {2010},
                                  pages = {40-42}
                                }
                                
                              • F. Ernst, R. Bruder, M. Pohl, A. Schlaefer, A. Schweikard (2010), "Prediction of Cardiac Motion", Proceedings of the 24th International Conference and Exhibition on Computer Assisted Radiology and Surgery (CARS'10)., In Proceedings of the 24th International Conference and Exhibition on Computer Assisted Radiology and Surgery (CARS'10). Vol. 5(1),273-274. [BibTeX]
                              • BibTeX:
                                @conference{Ernst2010a,
                                  author = {F. Ernst and R. Bruder and M. Pohl and A. Schlaefer and A. Schweikard},
                                  title = {Prediction of Cardiac Motion},
                                  booktitle = {Proceedings of the 24th International Conference and Exhibition on Computer Assisted Radiology and Surgery (CARS'10)},
                                  journal = {Proceedings of the 24th International Conference and Exhibition on Computer Assisted Radiology and Surgery (CARS'10)},
                                  year = {2010},
                                  volume = {5},
                                  number = {1},
                                  pages = {273-274}
                                }
                                
                              • F. Ernst, R. Bruder, A. Schlaefer, A. Schweikard (2010), "Improving the quality of biomedical signal tracking using prediction algorithms", UKACC International Conference on CONTROL 2010., In UKACC International Conference on CONTROL 2010., September, 2010. ,301-305. [Abstract] [BibTeX] [DOI] [URL]
                              • Abstract: The use of optical and magnetic tracking systems is widespread in modern operating theatres. What has not been taken into account so far is that all systems which need to make two or more sequential measurements to determine an object's pose will exhibit systematic measurement errors. These errors can be attributed to the non-simultaneous acquisition process. We have analysed this problem for the atracsys accuTrack system, an optical tracking system using three line cameras. Using robotised and manual experiments we found that, using a marker with four LEDs at a single-LED acquisition rate of 331.04 Hz, these errors can be as much as 1.4 mm and 2.1° (RMS). With lower acquisition rates, which are commonplace in other tracking systems, these errors are expected to be even higher. Using the proposed compensation methods, they may be reduced to as little as 0.2 mm and 0.6° (RMS), respectively.
                                BibTeX:
                                @conference{Ernst2010,
                                  author = {F. Ernst and R. Bruder and A. Schlaefer and A. Schweikard},
                                  title = {Improving the quality of biomedical signal tracking using prediction algorithms},
                                  booktitle = {UKACC International Conference on CONTROL 2010},
                                  journal = {UKACC International Conference on CONTROL 2010},
                                  year = {2010},
                                  pages = {301-305},
                                  url = {http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=6490756},
                                  doi = {10.1049/ic.2010.0298}
                                }
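
                                Note: one plausible compensation strategy consistent with the abstract above is to extrapolate each sequentially sampled LED to a common reference time before the marker pose is computed. The sketch below illustrates this with a simple per-LED linear extrapolation; the function name and the linear motion model are assumptions for illustration, not necessarily the compensation methods evaluated in the paper.
                                
                                  import numpy as np
                                  
                                  def extrapolate_leds(times, positions, t_ref):
                                      """times: (n_led, 2) sample times; positions: (n_led, 2, 3) positions.
                                      Returns (n_led, 3) positions extrapolated to the common time t_ref."""
                                      out = np.empty((positions.shape[0], 3))
                                      for i, ((t0, t1), (p0, p1)) in enumerate(zip(times, positions)):
                                          v = (p1 - p0) / (t1 - t0)          # per-LED velocity estimate
                                          out[i] = p1 + v * (t_ref - t1)     # extrapolate to the reference time
                                      return out
                                  
                                  # Example: 4 LEDs sampled sequentially at a 331 Hz per-LED rate, moving in x.
                                  rate = 331.04
                                  t = np.array([[i / rate, i / rate + 4 / rate] for i in range(4)])
                                  p = np.array([[[i * 0.01 + 0.1 * ti, 0.0, 0.0] for ti in row]
                                                for i, row in enumerate(t)])
                                  print(extrapolate_leds(t, p, t_ref=t.max()))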
                                
                              • F. Ernst, B. Stender, A. Schlaefer, A. Schweikard (2010), "Using ECG in Motion Prediction for Radiosurgery of the Beating Heart", The Hamlyn Symposium on Medical Robotics., In The Hamlyn Symposium on Medical Robotics. The Royal Society, London Vol. 3,37-38. [BibTeX]
                              • BibTeX:
                                @inproceedings{Ernst2010b,
                                  author = {F. Ernst and B. Stender and A. Schlaefer and A. Schweikard},
                                  title = {Using ECG in Motion Prediction for Radiosurgery of the Beating Heart},
                                  booktitle = {The Hamlyn Symposium on Medical Robotics},
                                  journal = {The Hamlyn Symposium on Medical Robotics},
                                  year = {2010},
                                  volume = {3},
                                  pages = {37-38}
                                }
                                
                              • M. Finke, B. Stender, R. Bruder, A. Schlaefer, A. Schweikard (2010), "An experimental comparison of control devices for automatic movements of a surgical microscope", In Computer Assisted Radiology and Surgery (CARS). [BibTeX]
                              • BibTeX:
                                @conference{Finke2010,
                                  author = {M. Finke and B. Stender and R. Bruder and A. Schlaefer and A. Schweikard},
                                  title = {An experimental comparison of control devices for automatic movements of a surgical microscope},
                                  booktitle = {Computer Assisted Radiology and Surgery (CARS)},
                                  year = {2010}
                                }
                                
                              • C. Fürweger, C. Drexler, M. Kufeld, A. Muacevic, B. Wowra, A. Schlaefer (2010), "Patient motion and targeting accuracy in robotic spinal radiosurgery: 260 single-fraction fiducial-free cases", Int J Radiat Oncol Biol Phys., In Int J Radiat Oncol Biol Phys. Vol. 78(3),937-945. [Abstract] [BibTeX] [DOI]
                              • Abstract: Purpose: The CyberKnife™ compensates translational target motion by moving the beams synchronously. While the system was found to operate with sub-millimeter accuracy in phantoms, determining the clinical accuracy is challenging. Measuring the delivered dose distribution inside a patient is impractical. Hence, an analysis of treatment data is typically used to estimate residual errors. Methods: We implant 3-5 fiducials for target tracking and treat liver tumors in 3-5 fractions with 45 Gy at 80% to the PTV (CTV+3mm). Patients are aligned based on X-ray images in expiration breath hold. During delivery, X-ray images are acquired every 60-90 s, and the translational and rotational misalignment is computed. We grouped this data into 10 respiratory phases. The mean misalignment for each phase was used to simulate the translation and rotation of the target with respect to the alignment center. The resulting dose distribution was computed and compared to the planned dose. Additionally, the quality of motion prediction was evaluated. Results: We analyzed 5 cases with a total of 17 fractions. The maximal target motion per fraction ranged from 9.2 mm to 25.7 mm (3D trajectory). The mean error for each patient ranged from -0.76/-0.01/-0.32 mm to 0.35/0.17/0.10 mm (translation IS/LR/AP) and -0.94/-0.82/-2.07 degrees to 0.24/1.95/2.36 degrees (rotation roll/pitch/yaw). The dose simulation showed point dose differences for each patient ranging from -0.10 Gy to -0.76 Gy (mean) and -1.13 Gy to -5.05 Gy (max). The resulting reduction in coverage ranged from 0.37% to 4.19% (PTV) and -0.43% to +0.94% (CTV). Finally, the mean prediction error over all fractions was 0.33 mm. Conclusions: We demonstrated that while maximum point dose differences can be considerable, the coverage of the CTV is maintained even in the presence of substantial respiratory motion. The results indicate that the standard 3 mm system uncertainty margin can account for errors due to rotation and deformation during robotic radiosurgery for tumors in the liver.
                                BibTeX:
                                @article{Fuerweger2010,
                                  author = {C. Fürweger and C. Drexler and M. Kufeld and A. Muacevic and B. Wowra and A. Schlaefer},
                                  title = {Patient motion and targeting accuracy in robotic spinal radiosurgery: 260 single-fraction fiducial-free cases},
                                  booktitle = {Int J Radiat Oncol Biol Phys},
                                  journal = {Int J Radiat Oncol Biol Phys},
                                  year = {2010},
                                  volume = {78},
                                  number = {3},
                                  pages = {937-945},
                                  doi = {10.1016/j.ijrobp.2009.11.030}
                                }
                                
                              • M. Heinig, R. Bruder, A. Schlaefer, A. Schweikard (2010), "3D localization of a thin steel rod using magnetic field sensors: feasibility and preliminary results", 4th Int. Conf. Bioinformatics and Biomedical Engineering (iCBBE), 2010., In 4th Int. Conf. Bioinformatics and Biomedical Engineering (iCBBE), 2010. Chengdu ,1-4. [Abstract] [BibTeX] [DOI] [URL]
                              • Abstract: We present the design, setup and preliminary results for a navigation system based on magnetic field sensors. Our system localizes the tip of a magnetized steel rod with a diameter of 0.5 mm in a cubic workspace with 30 mm edge length. We plan to localize electrodes and probes during surgeries, e.g. for small animal research like neurosurgery in rats. Only the static magnetic field of the steel rod is needed for localization. Our navigation system does not need any external excitation, wires or alternating magnetic fields. Hence, we avoid undesirable stimulation of the animal's brain and we are able to realize small (0.5 mm) probe diameters to reduce brain damage. Localization of the steel rod's tip is achieved using a nearest neighbor approach. The currently measured sensor values are compared to data stored in a previously generated lookup table. An industrial robot is used to create the lookup table and later to validate the accuracy of the system. Currently, the system has 3 degrees of freedom (DOF). The mean of the difference between true and determined position is -0.53; 0.31; -0.95 mm in X, Y, Z, with a standard deviation of 1.13; 1.24; 0.99 mm or lower. The influence of different noise sources, e.g. electric currents or metal, on the performance of the system is discussed.
                                BibTeX:
                                @inproceedings{Heinig2010,
                                  author = {M. Heinig and R. Bruder and A. Schlaefer and A. Schweikard},
                                  title = {3D localization of a thin steel rod using magnetic field sensors: feasibility and preliminary results},
                                  booktitle = {4th Int. Conf. Bioinformatics and Biomedical Engineering (iCBBE), 2010},
                                  journal = {4th Int. Conf. Bioinformatics and Biomedical Engineering (iCBBE), 2010},
                                  year = {2010},
                                  pages = {1-4},
                                  url = {http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=5515423},
                                  doi = {10.1109/ICBBE.2010.5515423}
                                }
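
                                Note: a minimal sketch of the nearest-neighbour lookup described in the abstract above, assuming the table stores one sensor-reading vector per known robot position. The synthetic field model, the optional background subtraction and the class name LookupLocalizer are illustrative assumptions; the companion paper below additionally refines the estimate by trilinear interpolation between the best table element and its neighbours.
                                
                                  import numpy as np
                                  from scipy.spatial import cKDTree
                                  
                                  class LookupLocalizer:
                                      def __init__(self, readings, positions):
                                          """readings: (n, d) sensor values; positions: (n, 3) known XYZ [mm]."""
                                          self.tree = cKDTree(readings)
                                          self.positions = np.asarray(positions)
                                  
                                      def locate(self, reading, background=None):
                                          """Return the table position whose stored reading is closest.
                                          Optionally subtract a previously measured magnetic background."""
                                          r = np.asarray(reading, dtype=float)
                                          if background is not None:
                                              r = r - background
                                          _, idx = self.tree.query(r)
                                          return self.positions[idx]
                                  
                                  # Example with a synthetic, roughly dipole-like field on a coarse 30 mm grid.
                                  grid = np.array([[x, y, z] for x in range(0, 31, 5)
                                                              for y in range(0, 31, 5)
                                                              for z in range(0, 31, 5)], dtype=float)
                                  readings = 1.0 / (1.0 + np.linalg.norm(grid - 15.0, axis=1, keepdims=True))**3 * grid
                                  loc = LookupLocalizer(readings, grid)
                                  probe = readings[100] + 1e-4   # a slightly noisy reading
                                  print(loc.locate(probe))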
                                
                              • M. Heinig, A. Schlaefer, A. Schweikard (2010), "3D localization of ferromagnetic probes for small animal neurosurgery", Engineering in Medicine and Biology Society (EMBC), 2010 Annual International Conference of the IEEE. ,2321-2324. [Abstract] [BibTeX] [DOI] [URL]
                              • Abstract: We present the design, setup and results for a magnetic navigation system for small animal stereotactic neurosurgery. Our system tracks the position of thin (diameter 0.5 mm), magnetized ferromagnetic probes inserted into the brains of small animals, e.g. rats, for electrophysiological recordings. It is used in combination with the spherical assistant for stereotactic surgery (SASSU) robot to obtain online feedback of the probe's position. Navigation is based only on the static magnetic field generated by the probes, thus no external excitation or wires are needed. The magnetic field created by the probe is measured by three sensors and compared to data of a previously generated lookup table. To account for overlaying magnetic fields (e.g. the earth's field), we determine and adjust for the magnetic background. A nearest neighbor approach is used to identify the best element of the lookup table. The actual position of the probe is found using trilinear interpolation between the best element and its neighbors. To validate the system, the workspace was filled with gelatin to simulate brain-like, organic structure. Next, several positions were approached by the robot. The difference between the ground truth position and the position determined by the system was calculated. We found that the norm of the mean values is between 0.09 mm and 0.64 mm with a norm of the standard deviation between 0.52 mm and 0.80 mm. No substantial difference between gelatin and non-gelatin data was observed. Our approach allows the online validation of the probe's position along the X-, Y- and Z-axes. We conclude that accurate localization of small ferromagnetic objects is feasible with our system. Currently, we are working on further applications including the use in human surgery, e.g. dermatology.
                                BibTeX:
                                @inproceedings{Heinig2010a,
                                  author = {M. Heinig and A. Schlaefer and A. Schweikard},
                                  title = {3D localization of ferromagnetic probes for small animal neurosurgery},
                                  journal = {Engineering in Medicine and Biology Society (EMBC), 2010 Annual International Conference of the IEEE},
                                  year = {2010},
                                  pages = {2321-2324},
                                  url = {http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=5627435},
                                  doi = {10.1109/IEMBS.2010.5627435}
                                }
                                
                              • V. Martens, O. Shahin, A. Beširevic, A. Schlaefer (2010), "A combined surface and ultrasound image approach for registration in laparoscopic liver surgery", In Proceedings of the 24th International Conference and Exhibition on Computer Assisted Radiology and Surgery (CARS 2010)., June, 2010. Vol. 5(1),285-287. [BibTeX]
                              • BibTeX:
                                @conference{Martens2010,
                                  author = {V. Martens and O. Shahin and A. Beširevic and A. Schlaefer},
                                  title = {A combined surface and ultrasound image approach for registration in laparoscopic liver surgery},
                                  booktitle = {Proceedings of the 24th International Conference and Exhibition on Computer Assisted Radiology and Surgery (CARS 2010)},
                                  year = {2010},
                                  volume = {5},
                                  number = {1},
                                  pages = {285-287}
                                }
                                
                              • C. Otte, R. Ansari, G. Hüttmann, A. Schlaefer (2010), "Bildverarbeitung für die OCT-Navigation in Weichgewebe", In Jahrestagung der Deutschen Gesellschaft für Biomedizinische Technik (BMT). [BibTeX]
                              • BibTeX:
                                @conference{Otte2010,
                                  author = {C. Otte and R. Ansari and G. Hüttmann and A. Schlaefer},
                                  title = {Bildverarbeitung für die OCT-Navigation in Weichgewebe},
                                  booktitle = {Jahrestagung der Deutschen Gesellschaft für Biomedizinische Technik (BMT)},
                                  year = {2010}
                                }
                                
                              • L. Richter, R. Bruder, A. Schlaefer (2010), "Proper Force-Torque Sensor System for robotized TMS: automatic coil calibration", Proceedings of the 24th International Conference and Exhibition on Computer Assisted Radiology and Surgery (CARS'10)., In Proceedings of the 24th International Conference and Exhibition on Computer Assisted Radiology and Surgery (CARS'10). Geneva, Switzerland Vol. 5(1),422-423. [BibTeX]
                              • BibTeX:
                                @inproceedings{Richter2010b,
                                  author = {L. Richter and R. Bruder and A. Schlaefer},
                                  title = {Proper Force-Torque Sensor System for robotized TMS: automatic coil calibration},
                                  booktitle = {Proceedings of the 24th International Conference and Exhibition on Computer Assisted Radiology and Surgery (CARS'10)},
                                  journal = {Proceedings of the 24th International Conference and Exhibition on Computer Assisted Radiology and Surgery (CARS'10)},
                                  year = {2010},
                                  volume = {5},
                                  number = {1},
                                  pages = {422-423}
                                }
                                
                              • L. Richter, R. Bruder, A. Schlaefer, A. Schweikard (2010), "Towards direct head navigation for robot-guided transcranial magnetic stimulation using 3D laserscans: idea, setup and feasibility", Conf Proc IEEE Eng Med Biol Soc., In Annual International Conference of the IEEE Engineering in Medicine and Biology Society. ,2283-2286. [Abstract] [BibTeX] [DOI]
                              • Abstract: Direct tracking is more robust than tracking that is based on additional markers. 3D laser scans can be used for direct tracking because they result in a 3D data set of surface points of the scanned object. For head-navigated robotized systems, it is crucial to know where the patient's head is positioned relative to the robot. We present a novel method to use a 3D laser scanner for direct head navigation in the robotized TMS system, which places a coil on the patient's head using an industrial robot. First experimental results showed a translational error <= 2 mm in the robot hand-eye calibration with the laser scanner. The rotational error was 0.75° and the scaling error <= 0.001. Furthermore, we found that the error of a scanned head relative to a reference head image was <= 0.2 mm using ICP. These results show that direct head navigation is feasible for the robotized TMS system. Additional effort has to be made in future systems to speed up the computation time for real-time capability.
                                BibTeX:
                                @inproceedings{Richter2010a,
                                  author = {L. Richter and R. Bruder and A. Schlaefer and A. Schweikard},
                                  title = {Towards direct head navigation for robot-guided transcranial magnetic stimulation using 3D laserscans: idea, setup and feasibility},
                                  booktitle = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society},
                                  journal = {Conf Proc IEEE Eng Med Biol Soc},
                                  year = {2010},
                                  pages = {2283-2286},
                                  doi = {10.1109/IEMBS.2010.5627660}
                                }
                                
                              • L. Richter, L. Matthäus, A. Schlaefer, A. Schweikard (2010), "Fast robotic compensation of spontaneous head motion during Transcranial Magnetic Stimulation (TMS)", UKACC International Conference on CONTROL 2010., In UKACC International Conference on CONTROL 2010. ,1-6. [Abstract] [BibTeX] [DOI] [URL]
                              • Abstract: As Transcranial Magnetic Stimulation (TMS) is spreading fast in neurology and neuroscience, advanced techniques for TMS are required. A robotized system is used for precise and repeatable TMS. The system is based on a serial industrial robot and an infrared tracking system for permanent position tracking of the head. For enhanced precision, a motion compensation module counterbalances head motions during stimulation. This avoids rigid fixation of the patient and leads to increased convenience and significant stress reduction. The motion compensation deals with two main scenarios: while the robot approaches the target point, the trajectory has to be adapted when the head moves; once the point is reached, the robot keeps the coil at the given target position relative to the head. For safe robot operations around the patient's head, a metric is used that restricts large joint changes. Furthermore, a running average is used to compensate jitter in the tracking measurements. As a specific extension of the motion compensation method for TMS, online coil pose adaptation and manual coil placement are integrated. Our results show that the motion compensation latency is about 110 ms. The associated compensation takes about 200 ms. Hence, the original position relative to the head is re-established within about 300 ms. Our recent clinical trials for rTMS have shown in practice that the motion compensation method is sufficient for medical applications.
                                BibTeX:
                                @inproceedings{Richter2010,
                                  author = {L. Richter and L. Matthäus and A. Schlaefer and A. Schweikard},
                                  title = {Fast robotic compensation of spontaneous head motion during Transcranial Magnetic Stimulation (TMS)},
                                  booktitle = {UKACC International Conference on CONTROL 2010},
                                  journal = {UKACC International Conference on CONTROL 2010},
                                  year = {2010},
                                  pages = {1-6},
                                  url = {http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=6490854},
                                  doi = {10.1049/ic.2010.0396}
                                }
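
                                Note: the abstract above mentions a running average to suppress jitter in the tracking measurements. The sketch below shows such a filter for the translational part of the tracked head pose; the window size and the restriction to positions are assumptions for illustration, not the parameters of the actual system.
                                
                                  from collections import deque
                                  import numpy as np
                                  
                                  class RunningAverageFilter:
                                      def __init__(self, window=5):
                                          self.buffer = deque(maxlen=window)   # last N tracked positions
                                  
                                      def update(self, position):
                                          """Add a new tracked position (x, y, z) and return the smoothed one."""
                                          self.buffer.append(np.asarray(position, dtype=float))
                                          return np.mean(self.buffer, axis=0)
                                  
                                  # Example: noisy samples of a slowly drifting head position.
                                  rng = np.random.default_rng(0)
                                  filt = RunningAverageFilter(window=5)
                                  for k in range(10):
                                      raw = np.array([0.1 * k, 0.0, 0.0]) + rng.normal(scale=0.2, size=3)
                                      smoothed = filt.update(raw)
                                  print(smoothed)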
                                
                              • A. Schlaefer, C. Otte, R. Ansari, G. Hüttmann, L. Richter, R. Bruder, M. Heinig, M. Sommerauer, G. Kovacs (2010), "Towards high resolution image guided navigation for prostate brachytherapy", International Journal of Computer Assisted Radiology and Surgery., In Proceedings of the 24th International Conference and Exhibition on Computer Assisted Radiology and Surgery (CARS'10). Geneva, Switzerland, June, 2010. Vol. 5(1),24-25. [BibTeX]
                              • BibTeX:
                                @inproceedings{Schlaefer2010b,
                                  author = {A. Schlaefer and C. Otte and R. Ansari and G. Hüttmann and L. Richter and R. Bruder and M. Heinig and M. Sommerauer and G. Kovacs},
                                  title = {Towards high resolution image guided navigation for prostate brachytherapy},
                                  booktitle = {Proceedings of the 24th International Conference and Exhibition on Computer Assisted Radiology and Surgery (CARS'10)},
                                  journal = {International Journal of Computer Assisted Radiology and Surgery},
                                  year = {2010},
                                  volume = {5},
                                  number = {1},
                                  pages = {24-25}
                                }
                                
                              • A. Schlaefer, A. Schweikard (2010), "Robotiksysteme für die Radiochirurgie", Computerassistierte Chirurgie., In Computerassistierte Chirurgie. (Kapitel 16) Elsevier. [Abstract] [BibTeX] [URL]
                              • Abstract: The book is the first fundamental and comprehensive German-language treatment of computer-assisted surgery. The first part covers the essential foundations across all surgical specialties, including visualization, computer-assisted surgical planning, localization systems, robot-assisted minimally invasive surgery, analysis and description of surgical workflows, and evaluation of human-machine interaction. The second part describes the clinical questions, solution approaches and experience gained to date in the respective surgical specialties. The code in the book unlocks additional content on the internet: videos on virtual surgical planning and on clinical application.
                                BibTeX:
                                @inbook{Schlaefer2010a,
                                  author = {A. Schlaefer and A. Schweikard},
                                  editor = {Peter Michael Schlag, Sebastian Eulenstein, Thomas Lange (Hrsg.)},
                                  title = {Robotiksysteme für die Radiochirurgie},
                                  booktitle = {Computerassistierte Chirurgie},
                                  journal = {Computerassistierte Chirurgie},
                                  publisher = {Elsevier},
                                  year = {2010},
                                  number = {Kapitel 16},
                                  url = {https://www.123library.org/ebook/isbn/9783437593260/}
                                }
                                
                              • A. Schlaefer, T. Wolschon (2010), "Optimizing the Trade-Off Between Number of Beam Starting Points and Plan Quality in Robotic Radiosurgery", XVIth International Conference on the Use of Computers in Radiation Therapy (ICCR)., In XVIth International Conference on the Use of Computers in Radiation Therapy (ICCR). [BibTeX]
                              • BibTeX:
                                @conference{Schlaefer2010,
                                  author = {A. Schlaefer and T. Wolschon},
                                  title = {Optimizing the Trade-Off Between Number of Beam Starting Points and Plan Quality in Robotic Radiosurgery},
                                  booktitle = {XVIth International Conference on the Use of Computers in Radiation Therapy (ICCR)},
                                  journal = {XVIth International Conference on the Use of Computers in Radiation Therapy (ICCR)},
                                  year = {2010}
                                }
                                
                              • B. Stender, A. Schlaefer (2010), "Handling and detecting motion in electrical impedance tomography" [BibTeX]
                              • BibTeX:
                                @conference{Stender2010,
                                  author = {B. Stender and A. Schlaefer},
                                  title = {Handling and detecting motion in electrical impedance tomography},
                                  year = {2010}
                                }
                                

                                2009

                                • R. Bruder, F. Ernst, A. Schlaefer, A. Schweikard (2009), "TH-C-304A-07: Real-Time Tracking of the Pulmonary Veins in 3D Ultrasound of the Beating Heart", Medical Physics. Vol. 36(6),2804-2804. Medical Physics. [Abstract] [BibTeX] [DOI] [URL]
                                • Abstract: Purpose: Currently, efforts are being made to use radiation therapy to cure heart diseases like arrhythmia. This approach requires high-accuracy localisation and tracking of the pulmonary veins. Because of the high speed of motion of the heart, fluoroscopic tracking of fiducials or anatomical structures as in IGRT would on the one hand require high frame rates and, on the other hand, it would be dangerous to place fiducials near the target. We propose to use live 3D ultrasound to perform the landmark localization and tracking. Methods and materials: We have modified a GE Vivid7 dimension 3D cardiovascular ultrasound station for real-time volume processing and target localisation. It is capable of providing ultrasound volume scans of the target region with more than 20 fps. A framework was established to upload and run image-processing algorithms directly on the ultrasound machine, which is necessary to handle the high amount of data. This avoids the bottleneck of Ethernet data streaming and external processing. We propose to localise the pulmonary veins using a template matching algorithm with multiple templates. Approximately 20 templates are manually generated during one heartbeat cycle. To increase the speed and accuracy of the matching process, electrical pulse signals were recorded by the ultrasound station. This allows selecting two or three pulse-dependent templates in the live matching stage. Results: The accuracy of the localization process is highly dependent on the templates chosen. The best results were achieved providing a full heart cycle as template data. As a compromise between speed and accuracy, we used 9×9×9 points as template, corresponding to 4.5×4.5×4.5 mm³. Conclusion: The presented approach is a new and robust way to semi-automatically track small substructures in the beating heart. Furthermore, the generated signal is suitable as input to numerous prediction algorithms currently used to compensate for breathing motion in radiosurgery.
                                  BibTeX:
                                  @article{Bruder2009,
                                    author = {R. Bruder and F. Ernst and A. Schlaefer and A. Schweikard},
                                    title = {TH-C-304A-07: Real-Time Tracking of the Pulmonary Veins in 3D Ultrasound of the Beating Heart},
                                    journal = {Medical Physics},
                                    publisher = {Medical Physics},
                                    year = {2009},
                                    volume = {36},
                                    number = {6},
                                    pages = {2804-2804},
                                    url = {http://scitation.aip.org/content/aapm/journal/medphys/36/6/10.1118/1.3182643},
                                    doi = {10.1118/1.3182643}
                                  }
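
                                  Note: a minimal sketch of phase-gated 3D template matching in the spirit of the abstract above: the template associated with the current cardiac phase is correlated with a small search region around the previous target position. Normalised cross-correlation and the exhaustive search are illustrative assumptions; the on-device implementation described in the abstract is not reproduced here.
                                  
                                    import numpy as np
                                    
                                    def ncc(a, b):
                                        """Normalised cross-correlation of two equally sized 3D patches."""
                                        a = a - a.mean()
                                        b = b - b.mean()
                                        denom = np.sqrt((a * a).sum() * (b * b).sum())
                                        return float((a * b).sum() / denom) if denom > 0 else 0.0
                                    
                                    def match_template(volume, template, center, radius=4):
                                        """Exhaustive search around `center` (z, y, x) within +/- radius voxels."""
                                        tz, ty, tx = template.shape
                                        best, best_pos = -np.inf, tuple(center)
                                        for dz in range(-radius, radius + 1):
                                            for dy in range(-radius, radius + 1):
                                                for dx in range(-radius, radius + 1):
                                                    z, y, x = center[0] + dz, center[1] + dy, center[2] + dx
                                                    if z < 0 or y < 0 or x < 0:
                                                        continue            # skip offsets outside the volume
                                                    patch = volume[z:z + tz, y:y + ty, x:x + tx]
                                                    if patch.shape != template.shape:
                                                        continue            # template would stick out of the volume
                                                    score = ncc(patch, template)
                                                    if score > best:
                                                        best, best_pos = score, (z, y, x)
                                        return best_pos, best
                                    
                                    def track(volume, templates, phase, center):
                                        """Pick the template for the current cardiac phase and match it."""
                                        return match_template(volume, templates[phase], center)
                                  
                                  With a 9×9×9 template and a search radius of a few voxels the exhaustive search stays small enough to illustrate the principle; a real-time system would need the on-device optimisations mentioned in the abstract.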
                                  
                                • R. Bruder, F. Ernst, A. Schlaefer, A. Schweikard (2009), "THC-C-305A-07: Real-time PV tracking in 3D ultrasound of the beating heart", Medical Physics., In Medical Physics. Vol. 36(2804),1116-1118. [Abstract] [BibTeX] [DOI]
                                • Abstract: Purpose: Currently, efforts are being made to use radiation therapy to cure heart diseases like arrhythmia. This approach requires high-accuracy localisation and tracking of the pulmonary veins. Because of the high speed of motion of the heart, fluoroscopic tracking of fiducials or anatomical structures as in IGRT would on the one hand require high frame rates and, on the other hand, it would be dangerous to place fiducials near the target. We propose to use live 3D ultrasound to perform the landmark localization and tracking. Methods and materials: We have modified a GE Vivid7 dimension 3D cardiovascular ultrasound station for real-time volume processing and target localisation. It is capable of providing ultrasound volume scans of the target region with more than 20 fps. A framework was established to upload and run image-processing algorithms directly on the ultrasound machine, which is necessary to handle the high amount of data. This avoids the bottleneck of Ethernet data streaming and external processing. We propose to localise the pulmonary veins using a template matching algorithm with multiple templates. Approximately 20 templates are manually generated during one heartbeat cycle. To increase the speed and accuracy of the matching process, electrical pulse signals were recorded by the ultrasound station. This allows selecting two or three pulse-dependent templates in the live matching stage. Results: The accuracy of the localization process is highly dependent on the templates chosen. The best results were achieved providing a full heart cycle as template data. As a compromise between speed and accuracy, we used 9×9×9 points as template, corresponding to 4.5×4.5×4.5 mm³. Conclusion: The presented approach is a new and robust way to semi-automatically track small substructures in the beating heart. Furthermore, the generated signal is suitable as input to numerous prediction algorithms currently used to compensate for breathing motion in radiosurgery.
                                  BibTeX:
                                  @article{Bruder2009b,
                                    author = {R. Bruder and F. Ernst and A. Schlaefer and A. Schweikard},
                                    title = {THC-C-305A-07: Real-time PV tracking in 3D ultrasound of the beating heart},
                                    booktitle = {Medical Physics},
                                    journal = {Medical Physics},
                                    year = {2009},
                                    volume = {36},
                                    number = {2804},
                                    pages = {1116-1118},
                                    doi = {10.1118/1.3182643}
                                  }
                                  
                                • R. Bruder, B. Stender, A. Schlaefer (2009), "Model Sailboats as a Testbed for Artificial Intelligence Methods", Proceedings of the 3rd International Robotics Sailing Conference., In International Robotic Sailing Conference. Kingston [BibTeX]
                                • BibTeX:
                                  @conference{Bruder2009a,
                                    author = {R. Bruder and B. Stender and A. Schlaefer},
                                    title = {Model Sailboats as a Testbed for Artificial Intelligence Methods},
                                    booktitle = {International Robotic Sailing Conference},
                                    journal = {Proceedings of the 3rd International Robotics Sailing Conference},
                                    year = {2009}
                                  }
                                  
                                • G. G. Echner, W. Kilby, M. Lee, E. Earnst, S. Sayeh, A. Schlaefer, B. Rhein, J. R. Dooley, C. Lang, O. Blanck, E. Lessard, C. R. Maurer Jr, W. Schlegel (2009), "The design, physical properties and clinical utility of an iris collimator for robotic radiosurgery", Phys Med Biol. Vol. 54(18),5359-5380. [Abstract] [BibTeX] [DOI]
                                • Abstract: Robotic radiosurgery using more than one circular collimator can improve treatment plan quality and reduce total monitor units (MU). The rationale for an iris collimator that allows the field size to be varied during treatment delivery is to enable the benefits of multiple-field-size treatments to be realized with no increase in treatment time due to collimator exchange or multiple traversals of the robotic manipulator by allowing each beam to be delivered with any desired field size during a single traversal. This paper describes the Iris variable aperture collimator (Accuray Incorporated, Sunnyvale, CA, USA), which incorporates 12 tungsten-copper alloy segments in two banks of six. The banks are rotated by 30 degrees with respect to each other, which limits the radiation leakage between the collimator segments and produces a 12-sided polygonal treatment beam. The beam is approximately circular, with a root-mean-square (rms) deviation in the 50% dose radius of <0.8% (corresponding to <0.25 mm at the 60 mm field size) and an rms variation in the 20-80% penumbra width of about 0.1 mm at the 5 mm field size increasing to about 0.5 mm at 60 mm. The maximum measured collimator leakage dose rate was 0.07%. A commissioning method is described by which the average dose profile can be obtained from four profile measurements at each depth based on the periodicity of the isodose line variations with azimuthal angle. The penumbra of averaged profiles increased with field size and was typically 0.2-0.6 mm larger than that of an equivalent fixed circular collimator. The aperture reproducibility is ≤0.1 mm at the lower bank, diverging to ≤0.2 mm at a nominal treatment distance of 800 mm from the beam focus. Output factors (OFs) and tissue-phantom-ratio data are identical to those used for fixed collimators, except the OFs for the two smallest field sizes (5 and 7.5 mm) are considerably lower for the Iris Collimator. If average collimator profiles are used, the assumption of circular symmetry results in dose calculation errors that are <1 mm or <1% for single beams across the full range of field sizes; errors for multiple non-coplanar beam treatment plans are expected to be smaller. Treatment plans were generated for 19 cases using the Iris Collimator (12 field sizes) and also using one and three fixed collimators. The results of the treatment planning study demonstrate that the use of multiple field sizes achieves multiple plan quality improvements, including reduction of total MU, increase of target volume coverage and improvements in conformality and homogeneity compared with using a single field size for a large proportion of the cases studied. The Iris Collimator offers the potential to greatly increase the clinical application of multiple field sizes for robotic radiosurgery.
                                  BibTeX:
                                  @article{Echner2009,
                                    author = {G. G. Echner and W. Kilby and M. Lee and E. Earnst and S. Sayeh and A. Schlaefer and B. Rhein and J. R. Dooley and C. Lang and O. Blanck and E. Lessard and C. R. Maurer Jr and W. Schlegel},
                                    title = {The design, physical properties and clinical utility of an iris collimator for robotic radiosurgery},
                                    journal = {Phys Med Biol},
                                    year = {2009},
                                    volume = {54},
                                    number = {18},
                                    pages = {5359-5380},
                                    doi = {10.1088/0031-9155/54/18/001}
                                  }
                                  
                                • C. Fürweger, M. Kufeld, A. Schlaefer, C. Drexler (2009), "Fiducial-free spinal radiosurgery: Patient motion and targeting accuracy in 227 single fraction treatments with the Cyberknife", World Congress on Medical Physics and Biomedical Engineering IFMBE Proceedings., September, 2009. Vol. 25(1),277-280. [Abstract] [BibTeX]
                                • Abstract: Objective: To evaluate clinical targeting precision in fiducial-free spinal treatments with a robotic radiosurgery system (Cyberknife). Methods: For assessment of spine tracking system performance, we conducted phantom tracking tests on cervical and thoracic vertebrae. We retrospectively evaluated intrafraction patient movement for cervical (47), thoracic (90) and lumbar (90) treatments. A conservative measure for the expected targeting error due to patient motion was derived. Results: The phantom tests show that spinal targets are detected with an accuracy of <0.2 mm for the translational and <0.3° for the rotational directions. The mean targeting error per beam due to residual patient motion is determined to be as low as 0.28±0.13 mm (X), 0.25±0.15 mm (Y), 0.19±0.11 mm (Z) for translational shifts and 0.40±0.20° (roll), 0.20±0.08° (pitch) and 0.19±0.08° (yaw) for rotations. Interestingly, the tracked spine section is of little significance for the overall targeting error due to motion, which is below 1 mm for more than 95% of our spinal treatments (median: 0.48 mm). Conclusions: We demonstrate that image-guided spinal radiosurgery with the Cyberknife allows for submillimeter precision in treatment delivery despite patient motion.
                                  BibTeX:
                                  @conference{Fuerweger2009a,
                                    author = {C. Fürweger and M. Kufeld and A. Schlaefer and C. Drexler},
                                    title = {Fiducial-free spinal radiosurgery: Patient motion and targeting accuracy in 227 single fraction treatments with the Cyberknife},
                                    journal = {World Congress on Medical Physics and Biomedical Engineering IFMBE Proceedings},
                                    year = {2009},
                                    volume = {25},
                                    number = {1},
                                    pages = {277-280}
                                  }
                                  
                                • C. Fürweger, A. Schlaefer, C. Drexler (2009), "Robotic radiosurgery with fiducial-free tracking: patient motion and targeting precision in 227 spinal cases", 10th Biennial ESTRO Conference on Physics and Radiation Technology for Clinical Radiotherapy. Vol. 92(1),34. [BibTeX] [DOI]
                                • BibTeX:
                                  @conference{Fuerweger2009,
                                    author = {C. Fürweger and A. Schlaefer and C. Drexler},
                                    title = {Robotic radiosurgery with fiducial-free tracking: patient motion and targeting precision in 227 spinal cases},
                                    journal = {10th Biennial ESTRO Conference on Physics and Radiation Technology for Clinical Radiotherapy},
                                    year = {2009},
                                    volume = {92},
                                    number = {1},
                                    pages = {34},
                                    doi = {10.1016/S0167-8140(12)72675-0}
                                  }
                                  
                                • M. Heinig, A. Schlaefer, A. Schweikard (2009), "Super resolution in Optical Coherence Tomography (OCT)", Medical Physics and Biomedical Engineering World Congress., In Medical Physics and Biomedical Engineering World Congress. Munich [Abstract] [BibTeX]
                                • Abstract: Introduction: Optical Coherence Tomography (OCT) is a commonly used imaging technology in medicine, for example in ophthalmology, dermatology and urology. Some applications would benefit from higher spatial resolution and speckle noise reduction. We present a robust and effective method to enhance spatial resolution in OCT. The proposed method also reduces the inherent speckle noise. Materials and Methods: The probe of a Thorlabs FD-OCT spectral radar (frequency 1.2 kHz, resolution 6.2 µm) was mounted on a piezo XY stage, pointing in the direction of the X-axis. To acquire images, the probe was moved stepwise in the longitudinal direction. Every step moved the probe a fourth of the spatial resolution of the OCT system. After each step, data was gathered from the OCT and stored at the appropriate position of the so-called virtual detector array (VDA). The VDA's data was processed by a super-resolution algorithm. Data for the same depth in the tissue was averaged to account for speckle noise. The resulting 1D image was low-pass filtered, yielding a low-noise image at twice the resolution of the OCT system. To acquire 2D images, the probe was moved along the lateral direction using the piezo stage. Results: Canvas tape was used as a phantom to test the system. The spatial resolution of the Thorlabs FD-OCT spectral radar was doubled from 6.2 µm to 3.1 µm. Images were acquired with and without using the super-resolution algorithm. The results show that speckle noise is substantially reduced and the spatial resolution of the image is effectively doubled. Conclusion: Applying super-resolution algorithms to OCT yields promising results in enhancing resolution. A second benefit is a clearly visible reduction of speckle noise. We plan to test the presented approach with real tissue and in vivo to study its potential use for micro-navigation, e.g., in neurosurgery.
                                  BibTeX:
                                  @conference{Heinig2009,
                                    author = {M. Heinig and A. Schlaefer and A. Schweikard},
                                    title = {Super resolution in Optical Coherence Tomography (OCT)},
                                    booktitle = {Medical Physics and Biomedical Engineering World Congress},
                                    journal = {Medical Physics and Biomedical Engineering World Congress},
                                    year = {2009}
                                  }
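
                                  Note: a sketch of the virtual-detector-array idea from the abstract above: A-scans acquired at sub-resolution probe offsets are interleaved onto a finer depth grid, repeated depths are averaged to suppress speckle, and the result is low-pass filtered. The step count, the averaging scheme and the 3-tap filter are assumptions for illustration only.
                                  
                                    import numpy as np
                                    
                                    def super_resolve(ascans, step_fraction=4):
                                        """ascans: (k, n) A-scans, scan i shifted by i/step_fraction of a pixel.
                                        Returns one A-scan sampled step_fraction times more densely."""
                                        k, n = ascans.shape
                                        vda = np.zeros(n * step_fraction)       # virtual detector array
                                        counts = np.zeros(n * step_fraction)
                                        for i in range(k):
                                            offset = i % step_fraction          # sub-pixel slot of this scan
                                            vda[offset::step_fraction] += ascans[i]
                                            counts[offset::step_fraction] += 1
                                        vda = vda / np.maximum(counts, 1)       # average repeated depths (speckle)
                                        kernel = np.ones(3) / 3.0               # simple low-pass filter
                                        return np.convolve(vda, kernel, mode="same")
                                    
                                    # Example: 8 synthetic A-scans of a two-layer sample, quarter-pixel steps.
                                    depth = np.linspace(0.0, 1.0, 128)
                                    scans = np.array([np.exp(-((depth - 0.3 - i * 0.002) / 0.02) ** 2)
                                                      + 0.5 * np.exp(-((depth - 0.7 - i * 0.002) / 0.02) ** 2)
                                                      for i in range(8)])
                                    print(super_resolve(scans).shape)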
                                  
                                • W. Kilby, A. Schlaefer, J. Dooley, O. Blanck, E. Lessard, C. Maurer (2009), "Evaluation of the Clinical Utility of an Iris Collimator Combined with a Sequential Optimization Algorithm for Robotic Radiosurgery", In Proceedings of the World Congress 2009 for Medical Physics and Biomedical Engineering. Vol. 25,916-919. [BibTeX] [URL]
                                • BibTeX:
                                  @conference{Kilby2009,
                                    author = {W. Kilby and A. Schlaefer and J. Dooley and O. Blanck and E. Lessard and C. Maurer},
                                    title = {Evaluation of the Clinical Utility of an Iris Collimator Combined with a Sequential Optimization Algorithm for Robotic Radiosurgery},
                                    booktitle = {Proceedings of the World Congress 2009 for Medical Physics and Biomedical Engineering},
                                    year = {2009},
                                    volume = {25},
                                    pages = {916-919},
                                    url = {http://www.springer.com/de/book/9783642038976}
                                  }
                                  
                                • E. Lessard, W. Kilby, J. Dooley, C. Sims, A. Schlaefer, O. Blanck, C. Maurer (2009), "Sequential Optimization Scripts to Facilitate Treatment Planning for Robotic Radiosurgery Clinical Studies for Prostate and Lung Cancers", World Congress on Medical Physics and Biomedical Engineering IFMBE Proceedings., In Medical Physics and Biomedical Engineering World Congress. Vol. 25(1),1031-1034. [Abstract] [BibTeX]
                                • Abstract: Sequential Optimization, which features a scriptable optimization process, is the latest inverse planning tool available for robotic radiosurgery. The purpose of this study was to create and evaluate Sequential Optimization scripts to facilitate the treatment planning for clinical studies for radiosurgery treatment of prostate and lung cancer. Four sample scripts were designed to generate treatment plans according to the clinical objectives defined in current prostate and lung cancer protocols. The scripts were evaluated using a group of 10 prostate cancer cases and 10 lung cancer cases with tumors of various sizes and shapes. The scripts generated treatment plans with dose distributions within minor variations defined by the protocols for most cases and provided good starting points for the challenging cases
                                  BibTeX:
                                  @conference{Lessard2009,
                                    author = {E. Lessard and W. Kilby and J. Dooley and C. Sims and A. Schlaefer and O. Blanck and C. Maurer},
                                    title = {Sequential Optimization Scripts to Facilitate Treatment Planning for Robotic Radiosurgery Clinical Studies for Prostate and Lung Cancers},
                                    booktitle = {Medical Physics and Biomedical Engineering World Congress},
                                    journal = {World Congress on Medical Physics and Biomedical Engineering IFMBE Proceedings},
                                    year = {2009},
                                    volume = {25},
                                    number = {1},
                                    pages = {1031-1034}
                                  }
                                  
                                • A. Schlaefer (2009), "Workflow-Based Treatment Planning for Robotic Radiosurgery", Int J CARS 4 (Suppl 1)., In 23rd International Congress and Exhibition Computer Assisted Radiology and Surgery CARS'2009. ,22. [BibTeX]
                                • BibTeX:
                                  @inproceedings{Schlaefer2009a,
                                    author = {A. Schlaefer},
                                    title = {Workflow-Based Treatment Planning for Robotic Radiosurgery},
                                    booktitle = {23st International Congress and Exhibition Computer Assisted Radiology and Surgery CARS'2009},
                                    journal = {Int J CARS 4 (Suppl 1)},
                                    year = {2009},
                                    pages = {22}
                                  }
                                  
                                • A. Schlaefer, D. Ruan, S. Dieterich, W. Kilby (2009), "A linear implementation of dose-volume constraints for multi-criteria optimization", Medical Physics and Biomedical Engineering World Congress. Munich, September, 2009. Vol. 25(1),322-325 (IFMBE Proceedings). [Abstract] [BibTeX]
                                • Abstract: Dose-volume constraints are widely used in inverse planning for radiation therapy. Given the intricate combinatorial nature of the problem, existing approaches suffer from long runtimes, insufficient approximation of the constraints, or the necessity to specify reasonable starting values a priori. This can be problematic, particularly when planning is considered as a multi-criteria optimization problem. We present a new method to handle dose-volume constraints during planning for robotic radiosurgery. Taking into account the typically very conformal nature of the resulting dose distributions, we specify the constraints on a small subset of points instead of the full volume. We show how this allows for an effective relaxation of the problem. Results for a prostate case indicate that the proposed method leads to good approximations of the dose-volume constraints and is independent of the optimization objective.
                                  BibTeX:
                                  @conference{Schlaefer2009,
                                    author = {A. Schlaefer and D. Ruan and S. Dieterich and W. Kilby},
                                    title = {A linear implementation of dose-volume constraints for multi-criteria optimization},
                                    journal = {Medical Physics and Biomedical Engineering World Congress},
                                    year = {2009},
                                    volume = {25},
                                    number = {1},
                                    pages = {IFMBE Proceedings 322-325}
                                  }
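
                                  Note: the abstract above describes enforcing dose-volume constraints on a small subset of points so that the planning problem stays linear. The sketch below is not the paper's formulation but a generic linear program in the same spirit: non-negative beam weights are found such that a minimum dose is reached at sampled target points while a maximum dose is respected at a subset of organ-at-risk points; all dose matrices and limits are synthetic illustrations.
                                  
                                    import numpy as np
                                    from scipy.optimize import linprog
                                    
                                    rng = np.random.default_rng(1)
                                    n_beams = 40
                                    D_target = rng.uniform(0.5, 1.0, size=(60, n_beams))   # dose per unit weight
                                    D_oar_sub = rng.uniform(0.0, 0.4, size=(15, n_beams))  # subset of OAR points
                                    d_min, d_max = 60.0, 20.0
                                    
                                    c = np.ones(n_beams)                          # minimise total beam weight (MU proxy)
                                    A_ub = np.vstack([-D_target, D_oar_sub])      # -D_t w <= -d_min and D_o w <= d_max
                                    b_ub = np.concatenate([-d_min * np.ones(60), d_max * np.ones(15)])
                                    
                                    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
                                    print(res.status, res.x.sum() if res.success else None)
                                  
                                  In a multi-criteria setting, the same linear constraints could be kept fixed while different objectives are optimised, which is the independence from the objective noted at the end of the abstract.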
                                  
                                • O. Witt, A. Schlaefer, L. Ramrath (2009), "A fuzzy segmentation approach to white matter detection in optical coherence tomography", In Poster Session of 23rd International Congress Computer Assisted Radiology and Surgery CARS'2009. Vol. 4(Suppl 1),328-329. [BibTeX] [DOI]
                                • BibTeX:
                                  @conference{Witt2009,
                                    author = {O. Witt and A. Schlaefer and L. Ramrath},
                                    title = {A fuzzy segmentation approach to white matter detection in optical coherence tomography},
                                    booktitle = {Poster Session of 23rd International Congress Computer Assisted Radiology and Surgery CARS'2009},
                                    year = {2009},
                                    volume = {4},
                                    number = {Suppl 1},
                                    pages = {328-329},
                                    doi = {10.1007/s11548-009-0343-9}
                                  }
                                  

                                  2008

                                   • S. Dieterich, A. Schlaefer (2008), "SU-GG-J-76: Dosimetric Consequences of Patient Setup Decisions in Image-Guided Procedures Based On Soft-Tissue Fiducials for Imaging", Medical Physics. Vol. 35(6),2696-2696. [Abstract] [BibTeX] [DOI]
                                   • Abstract: Purpose: The purpose of this work is to demonstrate the dosimetric consequences of patient setup and tracking technique decisions for 4D adaptive SBRT treatments. Method and Materials: Clinical fiducial tracking scenarios of 450 SBRT cases, 120 of them treated with 4D adaptive SBRT, were analyzed and classified into different case scenarios. A flowchart was created to systematically display each clinical tracking scenario and the decision options. Based on the flowchart, scenarios which could cause a deviation from delivered dose to planned dose were identified. For these situations, each tracking decision was evaluated for dosimetric impact by simulating the situation in treatment planning software using a patient CT and an artificially created elliptical tumor. Results: Only two clinical scenarios were identified as having dosimetric consequences. One scenario contains tumors which rotate significantly (>5 degrees) during the respiratory cycles. For rotations smaller than 5 degrees we saw no differences in the DVH compared to non-rotating tumors. For tumors with rotation angles larger than 6 degrees, the DVH shows increasing, but still small, tumor underdose. The second scenario with potential dosimetry changes was a small rotational change (<5 degrees) of the tumor position relative to the global patient position. Our calculations show that changing the global patient position to move the tumor into the treatment field did change the DVH, because the SSD and obliquity of incoming beams change. A third tracking scenario was identified in which a repeat simulation is necessary. Conclusion: Fiducial setup and rotational tumor tracking decisions for SBRT treatments were classified. Dosimetric impact was studied for the two relevant class decisions. A class of patients which needed re-CT was identified. Our flowchart and dosimetry studies will help in the future to systematically identify and address soft-tissue fiducial tracking scenarios and choice of tracking technique for SBRT.
                                    BibTeX:
                                    @article{Dieterich2008,
                                      author = {S. Dieterich and A. Schlaefer},
                                      title = {SU-GG-J-76: Dosimetric Consequences of Patient Setup Decisions in Image-Guided Procedures Based On Soft-Tissue Fiducials for Imaging},
                                      journal = {Medical Physics},
                                      year = {2008},
                                      volume = {35},
                                      number = {6},
                                      pages = {2696-2696},
                                      doi = {10.1118/1.2961626}
                                    }
                                    
                                  • F. Ernst, A. Schlaefer, S. Dieterich, A. Schweikard (2008), "A Fast Lane Approach to LMS Prediction of Respiratory Motion Signals", Biomedical Signal Processing and Control., October, 2008. Vol. 3(4),291-299. [Abstract] [BibTeX] [DOI]
                                   • Abstract: As a tool for predicting stationary signals, the Least Mean Squares (LMS) algorithm is widely used. Its improvement, the family of normalised LMS algorithms, is known to outperform this algorithm. However, they still remain sensitive to selecting wrong parameters, namely the learning coefficient u and the signal history length M. We propose an improved version of both algorithms using a Fast Lane Approach, based on parallel evaluation of several competing predictors. These were applied to respiratory motion data from motion-compensated radiosurgery. Prediction was performed using arbitrarily selected values for the learning coefficient u ∈ [0,0.3] and the signal history length M ∈ [1,15]. The results were compared to prediction using the globally optimal values of u and M found using a grid search. When the learning algorithm is seeded using locally optimal values (found using a grid search on the first 96 s of data), more than 44% of the test cases outperform the globally optimal result. In about 38% of the cases, the result comes to within 5% and, in about 9% of the cases, to within 5-10% of the global optimum. This indicates that the Fast Lane Approach is a robust method for selecting the parameters u and M.
                                    BibTeX:
                                    @article{Ernst2008,
                                      author = {F. Ernst and A. Schlaefer and S. Dieterich and A. Schweikard},
                                      title = {A Fast Lane Approach to LMS Prediction of Respiratory Motion Signals},
                                      journal = {Biomedical Signal Processing and Control},
                                      year = {2008},
                                      volume = {3},
                                      number = {4},
                                      pages = {291-299},
                                      doi = {10.1016/j.bspc.2008.06.001}
                                    }
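
The abstract above describes evaluating several competing normalized LMS predictors with different learning coefficients and signal history lengths in parallel and keeping the best performer. The snippet below is a minimal sketch of that idea on a synthetic breathing-like signal; the parameter grid, the ~26 Hz sampling rate, and the function nlms_predict are illustrative assumptions, not the paper's exact setup or evaluation protocol.

import numpy as np

def nlms_predict(signal, mu, M, eps=1e-6):
    """One-step-ahead normalized LMS prediction of a 1-D signal.
    Returns the prediction vector (the first M entries stay zero)."""
    w = np.zeros(M)
    pred = np.zeros(len(signal))
    for t in range(M, len(signal)):
        x = signal[t - M:t][::-1]            # most recent sample first
        pred[t] = w @ x
        e = signal[t] - pred[t]
        w += mu * e * x / (x @ x + eps)      # normalized weight update
    return pred

# "Fast lane" idea: run a grid of competing (mu, M) predictors in parallel and keep
# the one with the smallest RMS error on the data seen so far.
rng = np.random.default_rng(1)
t = np.arange(3000) / 26.0                                   # ~26 Hz sampling, assumed
signal = np.sin(2 * np.pi * 0.3 * t) + 0.05 * rng.standard_normal(t.size)

candidates = [(mu, M) for mu in (0.05, 0.1, 0.2, 0.3) for M in (2, 5, 10, 15)]
rmse = []
for mu, M in candidates:
    p = nlms_predict(signal, mu, M)
    rmse.append(np.sqrt(np.mean((signal[M:] - p[M:]) ** 2)))

best = int(np.argmin(rmse))
print("best (mu, M):", candidates[best], " RMSE:", round(rmse[best], 4))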
                                    
                                   • A. Schlaefer, A. Schweikard (2008), "Stepwise multi-criteria optimization for robotic radiosurgery", Med Phys. Vol. 35(5),2094-2103. [Abstract] [BibTeX] [DOI]
                                   • Abstract: Achieving good conformality and a steep dose gradient around the target volume remains a key aspect of radiosurgery. Clearly, this involves a trade-off between target coverage, conformality of the dose distribution, and sparing of critical structures. Yet, image guidance and robotic beam placement have extended highly conformal dose delivery to extracranial and moving targets. Therefore, the multi-criteria nature of the optimization problem becomes even more apparent, as multiple conflicting clinical goals need to be considered in a coordinated manner to obtain an optimal treatment plan. Typically, planning for robotic radiosurgery is based on constrained optimization, namely linear programming. An extension of that approach is presented, such that each of the clinical goals can be addressed separately and in any sequential order. For a set of common clinical goals the mapping to a mathematical objective and a corresponding constraint is defined. The trade-off among the clinical goals is explored by modifying the constraints and optimizing a simple objective, while retaining feasibility of the solution. Moreover, it becomes immediately obvious whether a desired goal can be achieved and where a trade-off is possible. No importance factors or predefined prioritizations of clinical goals are necessary. The presented framework forms the basis for interactive and automated planning procedures. It is demonstrated for a sample case that the linear programming formulation is suitable to search for a clinically optimal treatment, and that the optimization steps can be performed quickly to establish that a Pareto-efficient solution has been found. Furthermore, it is demonstrated how the stepwise approach is preferable compared to modifying importance factors.
                                    BibTeX:
                                    @article{Schlaefer2008c,
                                       author = {A. Schlaefer and A. Schweikard},
                                      title = {Stepwise multi-criteria optimization for robotic radiosurgery},
                                      journal = {Med Phys},
                                      year = {2008},
                                      volume = {35},
                                      number = {5},
                                      pages = {2094-2103},
                                      doi = {10.1118/1.2900716}
                                    }
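
The abstract above outlines a stepwise formulation in which each clinical goal is mapped to a linear objective and a matching constraint, and the goals are then optimized one after another while earlier results are retained as constraints. The toy sketch below mimics that pattern with two invented goals (mean dose to an organ at risk, then total beam weight); the matrices, bounds, and the 1% tolerance are assumptions of this example, not values from the paper.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)

# Toy dose matrices: rows are sampled points, columns are candidate beams.
n_beams = 25
D_target = rng.uniform(0.2, 1.0, size=(50, n_beams))   # points inside the target
D_oar = rng.uniform(0.0, 0.4, size=(30, n_beams))      # points inside an organ at risk
bounds = [(0, None)] * n_beams
d_min = 3.0                                            # required minimum target dose

# Step 1: minimise the mean OAR dose while covering the target (D_target @ w >= d_min).
c1 = D_oar.mean(axis=0)
res1 = linprog(c1, A_ub=-D_target, b_ub=-np.full(50, d_min), bounds=bounds, method="highs")
mean_oar = float(c1 @ res1.x)

# Step 2: keep the step-1 result as a constraint (small tolerance) and minimise total weight.
A_ub = np.vstack([-D_target, c1])
b_ub = np.concatenate([-np.full(50, d_min), [mean_oar * 1.01]])
res2 = linprog(np.ones(n_beams), A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")

print("mean OAR dose after step 1:", round(mean_oar, 3))
print("total beam weight after step 2:", round(res2.fun, 3))

Each later step starts from a feasible point, so the goals achieved earlier can only be relaxed deliberately, never lost as a side effect.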
                                    
                                  • A. Schlaefer O. Blanck (2008), "Establishing a Trade-Off Between Number of Beams and Plan Quality in Robotic Radiosurgery", Medical Physics. Vol. 35(5),2638. [Abstract] [BibTeX] [DOI]
                                   • Abstract: Purpose: To study the potential trade-off between the number of beams and the plan quality in robotic radiosurgery. Specifically, to assess whether the number of beams can be reduced by repeating the series of optimization steps on a subset of substantially weighted beams. Method and Materials: We use a linear programming formulation of the planning problem, where objective terms are matched by corresponding constraints. The optimization is decomposed into a series of steps. When a plan with acceptable quality has been obtained, the activation time of the beams is studied. Beams with an activation time below a threshold are removed from the plan. We then compare the effect of (A) rescaling the weight of the remaining beams to obtain at least 98% of the previous coverage, and (B) re-optimizing the beam weights by applying the series of optimization steps to the reduced set of beams. The methods are applied to three clinical cases, a spinal lesion, a head and neck tumor, and a prostate case. Results: We removed up to 17.6%, 52.3%, and 28.4% of the beams for the spinal, the head and neck, and the prostate case, respectively, while retaining the plan quality with reoptimization. In contrast, rescaling changed the dose distribution substantially and the plan quality metrics degraded to an unacceptable level. Conclusion: Removing low weighted beams can reduce the number of active beams, and hence the overall treatment time. Reoptimization using the original series of optimization steps leads to better plan quality compared with rescaling. The potential to reduce the number of beams while retaining plan quality depends on the clinical case, and the fraction of beams that is removed. Conflict of Interest: Research partially sponsored by Accuray Inc.
                                    BibTeX:
                                    @article{Schlaefer2008a,
                                      author = {A. Schlaefer and O. Blanck},
                                      title = {Establishing a Trade-Off Between Number of Beams and Plan Quality in Robotic Radiosurgery},
                                      journal = {Medical Physics},
                                      year = {2008},
                                      volume = {35},
                                      number = {5},
                                      pages = {2638},
                                      doi = {10.1118/1.2961382}
                                    }
                                    
                                  • A. Schlaefer O. Blanck (2008), "TU-EE-A1-05: Exploring the Spatial Trade-Off in Treatment Planning", Medical Physics., In Medical Physics. Vol. 35(6),2910. [Abstract] [BibTeX] [DOI]
                                   • Abstract: Purpose: To include spatial information during multi-criteria treatment planning. Particularly, to study whether constrained optimization on the voxel level allows to deliberately trade off the dose delivered to one region of a volume of interest (VOI) with respect to other clinical goals. Method and Materials: We extended a stepwise optimization method for robotic radiosurgery to interactively modify dose constraints on a voxel level. The optimization problem is solved using linear programming, and every term in the objective function is matched by a corresponding constraint. Clinical goals are addressed separately and maintained using the constraints. A trade-off among the clinical goals is then explored by a series of optimization steps. For visualization, VOIs are represented by a 3D grid of spheres, where each sphere represents a voxel and can be selected in a 3D scene. Constraints on the dose in the selected voxels can be considered independently for subsequent optimization steps. The method was applied to a prostate case, where we studied trade-offs with respect to the maximum dose in the rectum. Results: Relaxing the upper dose bound on a set of voxels in the prostate lobes by 150 cGy allowed to reduce the maximum rectum dose by 100 cGy. Likewise, a relaxation of the lower dose bound on a few voxels on the prostate surface by 100 cGy allowed to further reduce the maximum dose in the rectum by 157 cGy. Conclusion: Spatial information is not available from cumulative statistics typically used as criteria for treatment planning. Our results indicate that it is possible to include spatial information in interactive multi-criteria optimization. The proposed method can be used when clinical goals can be expressed with respect to a subregion of a VOI. Conflict of Interest: Research partially sponsored by Accuray Inc.
                                    BibTeX:
                                    @article{Schlaefer2008b,
                                      author = {A. Schlaefer and O. Blanck},
                                      title = {TU-EE-A1-05: Exploring the Spatial Trade-Off in Treatment Planning},
                                      booktitle = {Medical Physics},
                                      journal = {Medical Physics},
                                      year = {2008},
                                      volume = {35},
                                      number = {6},
                                      pages = {2910},
                                      doi = {10.1118/1.2962609}
                                    }
                                    
                                  • A. Schlaefer, J. Gill, A. Schweikard (2008), "A simulation and training environment for robotic radiosurgery", International Journal of Computer Assisted Radiology and Surgery. Vol. 3(3),267-274. [BibTeX] [DOI]
                                  • BibTeX:
                                    @article{Schlaefer2008,
                                      author = {A. Schlaefer and J. Gill and A. Schweikard},
                                      title = {A simulation and training environment for robotic radiosurgery},
                                      journal = {International Journal of Computer Assisted Radiology and Surgery},
                                      year = {2008},
                                      volume = {3},
                                      number = {3},
                                      pages = {267-274},
                                      doi = {10.1007/s11548-008-0159-z}
                                    }
                                    
                                   • A. Schlaefer, O. Jungmann (2008), "Plan quality versus number of beam source positions in image guided robotic radiosurgery", In Computer Assisted Radiology and Surgery (CARS'2008). [BibTeX]
                                  • BibTeX:
                                    @conference{Schlaefer2008d,
                                      author = {A. Schlaefer and O. Jungmann},
                                      title = {Plan quality versus number of beam source positions in image guided robotic radiosurgery},
                                      booktitle = {Computer Assisted Radiology and Surgery (CARS'2008)},
                                      year = {2008}
                                    }
                                    

                                    2007

                                    • F. Ernst, R. Bruder, A. Schlaefer (2007), "Processing of Respiratory Signals from Tracking Systems for Motion Compensated IGRT", Medical Physics., In 47th Annual Meeting of the American Association of Physicists in Medicine., July, 2007. Vol. 34(6),2565. [Abstract] [BibTeX] [DOI]
                                     • Abstract: Purpose: Improving the quality of signals obtained with optical and magnetic tracking systems. Special focus is placed on the measurement of respiratory motion signals for motion compensated IGRT and the possibility of filtering this data to obtain low-noise breathing signals. Method and Materials: The accuracy of five different tracking systems (NDI Polaris™, active and passive, Clarion MicronTracker™, BIG FP5000, NDI Aurora™) was examined by (a) tracking stationary markers over several hours, and (b) by attaching the markers to a Kuka KR16 robot to simulate human respiration. The à trous wavelet decomposition was used to decompose the measured signal into scales, and to remove scales related to high frequencies, i.e., noise. The method was applied to a sinusoidal signal with artificial noise modeled according to (a), to real measurements for a sinusoidal motion of the robot, and to a set of breathing motion data from an actual patient treated with the CyberKnife®. Motion prediction was applied to the data. Results: The error on the measurements of the stationary marker approaches a Gaussian distribution. For a tracking rate of 60 Hz, information related to breathing motion is represented by higher scales of the à trous wavelet decomposition. Removing the first three scales and reconstructing the signal from the remaining scales and trend, it is possible to obtain close and smooth approximations of the original signal. The normalized RMS error for motion prediction is 0.3368 mm and 0.1378 mm for a simulated and the smoothed signal using normalized LMS prediction. Conclusion: Data from tracking devices is subject to device specific measurement noise. The à trous wavelet decomposition can be used to remove frequencies related to noise from measured breathing signals. The resulting signal is suitable for further processing, e.g., correlation with or prediction of tumor motion in the context of motion compensated IGRT.
                                      BibTeX:
                                      @conference{Ernst2007a,
                                        author = {F. Ernst and R. Bruder and A. Schlaefer},
                                        title = {Processing of Respiratory Signals from Tracking Systems for Motion Compensated IGRT},
                                        booktitle = {47th Annual Meeting of the American Association of Physicists in Medicine},
                                        journal = {Medical Physics},
                                        year = {2007},
                                        volume = {34},
                                        number = {6},
                                        pages = {2565},
                                        doi = {10.1118/1.2761413}
                                      }
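
The abstract above uses the à trous wavelet decomposition to split a tracking signal into scales and to discard the high-frequency scales before reconstruction. The sketch below shows that scheme on a synthetic 60 Hz respiration-like signal; the B3-spline smoothing kernel, the zero-padded borders, and the choice of five levels are common defaults assumed here, not necessarily the paper's settings.

import numpy as np

def a_trous_decompose(signal, levels, kernel=(1/16, 4/16, 6/16, 4/16, 1/16)):
    """Stationary ('a trous') wavelet decomposition with a B3-spline kernel.
    Returns (details, approximation) with signal == sum(details) + approximation."""
    c = np.asarray(signal, dtype=float)
    kernel = np.asarray(kernel)
    details = []
    for j in range(levels):
        k = np.zeros((len(kernel) - 1) * 2**j + 1)   # kernel with 2**j - 1 holes between taps
        k[:: 2**j] = kernel
        smooth = np.convolve(c, k, mode="same")      # zero padding at the borders (simplification)
        details.append(c - smooth)
        c = smooth
    return details, c

# Toy respiration-like signal sampled at 60 Hz with additive measurement noise.
rng = np.random.default_rng(3)
t = np.arange(0, 60, 1 / 60)
clean = np.sin(2 * np.pi * 0.25 * t)
signal = clean + 0.1 * rng.standard_normal(t.size)

details, approx = a_trous_decompose(signal, levels=5)
# Drop the first three (high-frequency) scales and reconstruct, as described in the abstract.
denoised = approx + sum(details[3:])
print("RMS error before:", round(np.sqrt(np.mean((signal - clean) ** 2)), 3),
      " after:", round(np.sqrt(np.mean((denoised - clean) ** 2)), 3))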
                                      
                                    • F. Ernst, A. Schlaefer, A. Schweikard (2007), "Prediction of respiratory motion with wavelet-based multiscale autoregression", In Med Image Comput Comput Assist Interv.. Thesis at: Institute of Robotics and Cognitive Systems, University of Lübeck, DE. ernst@rob.uni-luebeck.de. Brisbane, Australia Vol. 10(Pt 2),668-675. [Abstract] [BibTeX] [DOI]
                                    • Abstract: In robotic radiosurgery, a photon beam source, moved by a robot arm, is used to ablate tumors. The accuracy of the treatment can be improved by predicting respiratory motion to compensate for system delay. We consider a wavelet-based multiscale autoregressive prediction method. The algorithm is extended by introducing a new exponential averaging parameter and the use of the Moore-Penrose pseudo inverse to cope with long-term signal dependencies and system matrix irregularity, respectively. In test cases, this new algorithm outperforms normalized LMS predictors by as much as 50%. With real patient data, we achieve an improvement of around 5 to 10%.
                                      BibTeX:
                                      @inproceedings{Ernst2007,
                                        author = {F. Ernst and A. Schlaefer and A. Schweikard},
                                        title = {Prediction of respiratory motion with wavelet-based multiscale autoregression},
                                        booktitle = {Med Image Comput Comput Assist Interv},
                                        school = {Institute of Robotics and Cognitive Systems, University of Lübeck, DE. ernst@rob.uni-luebeck.de},
                                        year = {2007},
                                        volume = {10},
                                        number = {Pt 2},
                                        pages = {668-675},
                                        doi = {10.1007/s11548-007-0083-7}
                                      }
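
The abstract above describes a wavelet-based multiscale autoregressive predictor that uses the Moore-Penrose pseudo-inverse to cope with system matrix irregularity. The sketch below is a strongly simplified variant: it decomposes a synthetic signal with a stationary wavelet transform, regresses the future sample on lagged values of every band via numpy's pinv, and omits the paper's exponential averaging; all parameters (levels, order, window, horizon) are illustrative assumptions.

import numpy as np

def a_trous(signal, levels, kernel=(1/16, 4/16, 6/16, 4/16, 1/16)):
    """Stationary wavelet decomposition; returns an array of bands (detail scales plus
    the final approximation) that sum to the input signal."""
    c = np.asarray(signal, dtype=float)
    kernel = np.asarray(kernel)
    bands = []
    for j in range(levels):
        k = np.zeros((len(kernel) - 1) * 2**j + 1)
        k[:: 2**j] = kernel
        smooth = np.convolve(c, k, mode="same")
        bands.append(c - smooth)
        c = smooth
    bands.append(c)
    return np.array(bands)

def multiscale_ar_forecast(signal, levels=3, order=4, horizon=5, window=300):
    """Forecast signal[t + horizon] from lagged values of every wavelet band at time t.
    The least-squares system is solved with the Moore-Penrose pseudo-inverse."""
    bands = a_trous(signal, levels)
    n_bands, N = bands.shape
    rows, targets = [], []
    for t in range(N - window - horizon, N - horizon):
        rows.append(bands[:, t - order + 1:t + 1].ravel())   # 'order' lags of every band
        targets.append(signal[t + horizon])
    coeff = np.linalg.pinv(np.array(rows)) @ np.array(targets)
    latest = bands[:, N - order:N].ravel()
    return float(latest @ coeff)

rng = np.random.default_rng(4)
t = np.arange(0, 120, 1 / 26)                                # ~26 Hz, two minutes of data
sig = (np.sin(2 * np.pi * 0.25 * t) + 0.3 * np.sin(2 * np.pi * 0.5 * t)
       + 0.05 * rng.standard_normal(t.size))

horizon = 4                                                  # ~150 ms latency, assumed
pred = multiscale_ar_forecast(sig[:-horizon], horizon=horizon)
print("predicted:", round(pred, 3), " actual:", round(float(sig[-1]), 3))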
                                      
                                    • L. Ramrath, A. Schlaefer, F. Ernst, S. Dieterich, A. Schweikard (2007), "Prediction of respiratory motion with a multi-frequency based Extended Kalman Filter", In International Journal of Computer Assisted Radiology and Surgery CARS'2007. Vol. 21(2),56-58. [Abstract] [BibTeX] [DOI]
                                    • Abstract: In this work, an Extended Kalman Filter formulation for respiration motion tracking is introduced. Based on the assumption of multiple sinusoidal components contributing to respiratory motion, a state-space model is developed. Performance of the filter is tested on data sets of patients subject to radiotherapy. Comparison to an nLMS predictor shows that the Kalman filter is less sensitive to systematic errors during target prediction.
                                      BibTeX:
                                      @inproceedings{Ramrath2007,
                                        author = {L. Ramrath and A. Schlaefer and F. Ernst and S. Dieterich and A. Schweikard},
                                        title = {Prediction of respiratory motion with a multi-frequency based Extended Kalman Filter},
                                        booktitle = {International Journal of Computer Assisted Radiology and Surgery CARS'2007},
                                        year = {2007},
                                        volume = {21},
                                        number = {2},
                                        pages = {56-58},
                                        doi = {10.1007/s11548-007-0083-7}
                                      }
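
The abstract above assumes respiratory motion to be a sum of sinusoidal components and tracks it with an Extended Kalman Filter. The sketch below implements a generic EKF for such a sum-of-sinusoids state (amplitude, angular frequency, and phase per component); the state layout, noise covariances, and simulated measurements are assumptions of this example, not the paper's state-space model.

import numpy as np

# Extended Kalman filter with a sum-of-sinusoids respiration model (illustrative only).
# State per component: amplitude a, angular frequency w, phase p; measurement z = sum a*sin(p).
n_comp = 2
dt = 1 / 26.0                                   # ~26 Hz sampling, assumed
dim = 3 * n_comp

F = np.eye(dim)
for k in range(n_comp):
    F[3 * k + 2, 3 * k + 1] = dt                # phase integrates frequency: p <- p + w*dt
Q = 1e-5 * np.eye(dim)                          # process noise (hand tuned for the toy signal)
R = 1e-2                                        # measurement noise variance

def h(x):                                       # measurement model
    return sum(x[3 * k] * np.sin(x[3 * k + 2]) for k in range(n_comp))

def H_jac(x):                                   # Jacobian of h with respect to the state
    H = np.zeros(dim)
    for k in range(n_comp):
        a, _, p = x[3 * k:3 * k + 3]
        H[3 * k] = np.sin(p)
        H[3 * k + 2] = a * np.cos(p)
    return H

# Simulated measurements: two sinusoids plus noise.
rng = np.random.default_rng(5)
t = np.arange(0, 60, dt)
z = 10 * np.sin(2 * np.pi * 0.25 * t) + 2 * np.sin(2 * np.pi * 0.5 * t + 1.0)
z_noisy = z + 0.3 * rng.standard_normal(t.size)

x = np.array([8.0, 2 * np.pi * 0.25, 0.0, 1.0, 2 * np.pi * 0.5, 0.0])   # rough initial guess
P = np.eye(dim)
est = []
for zk in z_noisy:
    x = F @ x                                   # predict
    P = F @ P @ F.T + Q
    Hk = H_jac(x)                               # update
    S = Hk @ P @ Hk + R
    K = (P @ Hk) / S
    x = x + K * (zk - h(x))
    P = (np.eye(dim) - np.outer(K, Hk)) @ P
    est.append(h(x))

err = np.sqrt(np.mean((np.array(est)[500:] - z[500:]) ** 2))
print("steady-state RMS tracking error:", round(float(err), 3))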
                                      
                                    • A. Schlaefer, O. Blanck, A. Schweikard (2007), "Interactive Multi-criteria Inverse Planning for Robotic Radiosurgery", XVth International Conference on the Use of Computers in Radiation Therapy (ICCR 2007)., In XVth International Conference on the Use of Computers in Radiation Therapy (ICCR). Toronto, Canada [BibTeX]
                                    • BibTeX:
                                      @conference{Schlaefer2007,
                                        author = {A. Schlaefer and O. Blanck and A. Schweikard},
                                        title = {Interactive Multi-criteria Inverse Planning for Robotic Radiosurgery},
                                        booktitle = {XVth International Conference on the Use of Computers in Radiation Therapy (ICCR)},
                                        journal = {XVth International Conference on the Use of Computers in Radiation Therapy (ICCR 2007)},
                                        year = {2007}
                                      }
                                      
                                    • A. Schlaefer, O. Jungmann, W. Kilby, A. Schweikard (2007), "Objective specific beam generation for image guided robotic radiosurgery", In 21st International Congress and Exhibition Computer Assisted Radiology and Surgery CARS'2007. Berlin, Germany [BibTeX]
                                    • BibTeX:
                                      @conference{Schlaefer2007a,
                                        author = {A. Schlaefer and O. Jungmann and W. Kilby and A. Schweikard},
                                        title = {Objective specific beam generation for image guided robotic radiosurgery},
                                        booktitle = {21st International Congress and Exhibition Computer Assisted Radiology and Surgery CARS'2007},
                                        year = {2007}
                                      }
                                      

                                      2006

                                       • O. Blanck, A. Schlaefer, A. Schweikard (2006), "3D visualization of radiosurgical treatment plans - experience with Java3D and VTK", Computational Modeling of Objects Represented in Images-Fundamentals, Methods and Applications, First International Symposium CompIMAGE., In Computational Modelling of Objects Represented in Images, Proceedings of the International Symposium CompIMAGE. Coimbra, Portugal, October, 2006. ,101-106. [BibTeX] [URL]
                                      • BibTeX:
                                        @inproceedings{Blanck2006,
                                          author = {O. Blanck and A. Schlaefer and A. Schweikard},
                                           title = {3D visualization of radiosurgical treatment plans - experience with Java3D and VTK},
                                          booktitle = {Computational Modelling of Objects Represented in Images, Proceedings of the International Symposium CompIMAGE},
                                          journal = {Computational Modeling of Objects Represented in Images-Fundamentals, Methods and Applications, First International Symposium CompIMAGE},
                                          year = {2006},
                                          pages = {101-106},
                                          url = {http://dblp.uni-trier.de/rec/bib/conf/compimage/BlanckSS06}
                                        }
                                        
                                      • P. Romanelli, A. Schweikard, A. Schlaefer, J. Adler (2006), "Computer aided robotic surgery", Comput Aided Surg. Vol. 11(4),161-174. [Abstract] [BibTeX] [DOI]
                                      • Abstract: Radiosurgery involves the precise delivery of sharply collimated high-energy beams of radiation to a distinct target volume along selected trajectories. Historically, accurate targeting required the application of a stereotactic frame, thus limiting the use of this procedure to single treatments of selected intracranial lesions. However, the scope of radiosurgery has undergone a remarkable broadening since the introduction of image-guided robotic radiosurgery. Recent developments in real-time image guidance provide an effective frameless alternative to conventional radiosurgery and allow both the treatment of lesions outside the skull and the possibility of performing hypofractionation. As a consequence, targets in the spine, chest and abdomen can now also be radiosurgically ablated with submillimetric precision. Meanwhile, the combination of image guidance, robotic beam delivery, and non-isocentric inverse planning can greatly enhance the conformality and homogeneity of radiosurgery. The aim of this article is to describe the technological basis of image-guided radiosurgery and provide a perspective on future developments. The current clinical usage of robotic radiosurgery will be reviewed with an emphasis on those applications that may represent a major shift in the therapeutic paradigm
                                        BibTeX:
                                        @article{Romanelli2006,
                                          author = {P. Romanelli and A. Schweikard and A. Schlaefer and J. Adler},
                                          title = {Computer aided robotic surgery},
                                          journal = {Comput Aided Surg},
                                          year = {2006},
                                          volume = {11},
                                          number = {4},
                                          pages = {161-174},
                                          doi = {10.3109/10929080600886393}
                                        }
                                        
                                      • A. Schlaefer, O. Blanck, A. Muacevic, A. Schweikard (2006), "Inverse Planung für die robotergestützte Strahlentherapie: Strahlauswahl und Gewichtung" [BibTeX]
                                      • BibTeX:
                                        @conference{Schlaefer2006,
                                          author = {A. Schlaefer and O. Blanck and A. Muacevic and A. Schweikard},
                                          title = {Inverse Planung für die robotergestützte Strahlentherapie: Strahlauswahl und Gewichtung},
                                          year = {2006}
                                        }
                                        
                                      • A. Schlaefer, O. Blanck, H. Shiomi, A. Schweikard (2006), "An iterative beam placement approach for image guided robotic radiosurgery", International Journal of Computer Assisted Radiology and Surgery (CARS). Vol. 1, Supplement 1,226-228. [BibTeX]
                                      • BibTeX:
                                        @article{Schlaefer2006a,
                                          author = {A. Schlaefer and O. Blanck and H. Shiomi and A. Schweikard},
                                          title = {An iterative beam placement approach for image guided robotic radiosurgery},
                                          journal = {International Journal of Computer Assisted Radiology and Surgery (CARS)},
                                          year = {2006},
                                          volume = {1, Supplement 1},
                                          pages = {226-228}
                                        }
                                        
                                       • A. Schweikard, A. Schlaefer, J. R. Adler Jr (2006), "Resampling: an optimization method for inverse planning in robotic radiosurgery", Med Phys. Vol. 33(11),4005-4011. [Abstract] [BibTeX] [DOI]
                                       • Abstract: By design, the range of beam directions in conventional radiosurgery is constrained to an isocentric array. However, the recent introduction of robotic radiosurgery dramatically increases the flexibility of targeting, and as a consequence, beams need be neither coplanar nor isocentric. Such a nonisocentric design permits a large number of distinct beam directions to be used in one single treatment. These major technical differences provide an opportunity to improve upon the well-established principles for treatment planning used with GammaKnife or LINAC radiosurgery. With this objective in mind, our group has developed over the past decade an inverse planning tool for robotic radiosurgery. This system first computes a set of beam directions, and then during an optimization step, weights each individual beam. Optimization begins with a feasibility query, the answer to which is derived through linear programming. This approach offers the advantage of completeness and avoids local optima. Final beam selection is based on heuristics. In this report we present and evaluate a new strategy for utilizing the advantages of linear programming to improve beam selection. Starting from an initial solution, a heuristically determined set of beams is added to the optimization problem, while beams with zero weight are removed. This process is repeated to sample a set of beams much larger compared with typical optimization. Experimental results indicate that the planning approach efficiently finds acceptable plans and that resampling can further improve its efficiency.
                                        BibTeX:
                                        @article{Schweikard2006,
                                          author = {A. Schweikard and A. Schlaefer and J. R. Adler Jr},
                                          title = {Resampling: an optimization method for inverse planning in robotic radiosurgery},
                                          journal = {Med Phys},
                                          year = {2006},
                                          volume = {33},
                                          number = {11},
                                          pages = {4005-4011},
                                          doi = {10.1118/1.2357020}
                                        }
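
The abstract above describes resampling: solve a linear program over a candidate beam set, drop the beams that receive zero weight, add newly generated candidates, and repeat. The loop below reproduces that outer structure on random toy data; the candidate generator, the dose bounds, and the five iterations are placeholders, not the heuristics used in the paper.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(6)
n_points = 60                                   # sampled target points (toy)
d_min, d_max = 3.0, 6.0

def random_beams(n):
    """Stand-in for heuristic candidate-beam generation: random dose coefficients."""
    return rng.uniform(0.0, 1.0, size=(n_points, n))

def optimize(D):
    """Minimise total beam weight subject to d_min <= D @ w <= d_max and w >= 0."""
    A = np.vstack([D, -D])
    b = np.concatenate([np.full(n_points, d_max), np.full(n_points, -d_min)])
    return linprog(np.ones(D.shape[1]), A_ub=A, b_ub=b,
                   bounds=[(0, None)] * D.shape[1], method="highs")

D = random_beams(100)
for it in range(5):
    res = optimize(D)
    assert res.success
    active = res.x > 1e-8
    print(f"iteration {it}: {D.shape[1]} candidates, {active.sum()} weighted, "
          f"total weight {res.fun:.3f}")
    # Resampling step: keep only the weighted beams and add fresh candidates.
    D = np.hstack([D[:, active], random_beams(50)])

Keeping the previously weighted beams in the candidate set guarantees that every iteration starts from a feasible plan, which is the property the loop relies on.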
                                        

                                        2005

                                        • L. Fritsche, J. Hoerstrup, K. Budde, P. Reinke, H. H. Neumayer, U. Frei, A. Schlaefer (2005), "Accurate prediction of kidney allograft outcome based on creatinine course in the first 6 months posttransplant", Transplant Proceedings. Vol. 37(2),731-733. [Abstract] [BibTeX] [DOI]
                                         • Abstract: Most attempts to predict early kidney allograft loss are based on the patient and donor characteristics at baseline. We investigated how the early posttransplant creatinine course compares to baseline information in the prediction of kidney graft failure within the first 4 years after transplantation. Two approaches to create a prediction rule for early graft failure were evaluated. First, the whole data set was analysed using a decision-tree building software. The software, rpart, builds classification or regression models; the resulting models can be represented as binary trees. In the second approach, a Hill-Climbing algorithm was applied to define cut-off values for the median creatinine level and creatinine slope in the period between day 60 and 180 after transplantation. Of the 497 patients available for analysis, 52 (10.5%) experienced an early graft loss (graft loss within the first 4 years after transplantation). From the rpart algorithm, a single decision criterion emerged: Median creatinine value on days 60 to 180 higher than 3.1 mg/dL predicts early graft failure (accuracy 95.2% but sensitivity = 42.3%). In contrast, the Hill-Climbing algorithm delivered a cut-off of 1.8 mg/dL for the median creatinine level and a cut-off of 0.3 mg/dL per month for the creatinine slope (sensitivity = 69.5% and specificity 79.0%). Prediction rules based on median and slope of creatinine levels in the first half year after transplantation allow early identification of patients who are at risk of losing their graft early after transplantation. These patients may benefit from therapeutic measures tailored for this high-risk setting.
                                          BibTeX:
                                          @article{Fritsche2005,
                                            author = {L. Fritsche and J. Hoerstrup and K. Budde and P. Reinke and H. H. Neumayer and U. Frei and A. Schlaefer},
                                            title = {Accurate prediction of kidney allograft outcome based on creatinine course in the first 6 months posttransplant},
                                            journal = {Transplant Proceedings},
                                            year = {2005},
                                            volume = {37},
                                            number = {2},
                                            pages = {731-733},
                                            doi = {10.1016/j.transproceed.2004.12.067}
                                          }
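
The abstract above reports a hill-climbing search for cut-off values on the median creatinine level and the creatinine slope between days 60 and 180. The sketch below shows a plain coordinate-wise hill climb of that kind on synthetic data; the generated values, the combined decision rule, and the balanced-accuracy objective are assumptions of this example, whereas the study evaluated sensitivity and specificity on real patient data.

import numpy as np

rng = np.random.default_rng(7)

# Synthetic stand-in data: median creatinine (mg/dL) and slope (mg/dL per month) on
# days 60-180, plus an early-graft-failure label. Purely illustrative, not study data.
n = 500
failure = rng.random(n) < 0.1
median_crea = np.where(failure, rng.normal(2.4, 0.8, n), rng.normal(1.4, 0.4, n))
slope = np.where(failure, rng.normal(0.35, 0.2, n), rng.normal(0.0, 0.15, n))

def score(cut_med, cut_slope):
    """Balanced accuracy of the rule 'failure if median > cut_med or slope > cut_slope'."""
    pred = (median_crea > cut_med) | (slope > cut_slope)
    sens = (pred & failure).sum() / failure.sum()
    spec = (~pred & ~failure).sum() / (~failure).sum()
    return (sens + spec) / 2

# Simple hill climbing over the two cut-offs.
cut = np.array([1.5, 0.2])
step = np.array([0.05, 0.02])
best = score(*cut)
improved = True
while improved:
    improved = False
    for delta in (np.array([step[0], 0]), np.array([-step[0], 0]),
                  np.array([0, step[1]]), np.array([0, -step[1]])):
        cand = cut + delta
        s = score(*cand)
        if s > best:
            best, cut, improved = s, cand, True
print("cut-offs (median, slope):", np.round(cut, 2), " balanced accuracy:", round(best, 3))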
                                          
                                        • A. Schlaefer, O. Blanck, A. Schweikard (2005), "WE-C-I-609-09: Autostereoscopic Display of the 3D Dose Distribution to Assess Beam Placement for Robotic Radiosurgery", Medical Physics. Vol. 32(6),2122. [Abstract] [BibTeX] [DOI]
                                         • Abstract: Purpose: To study whether a 3D view of the dose distribution and treatment beams on an autostereoscopic display facilitates a ‘smart’ placement of additional beams for robotic radiosurgery. Method and Materials: Treatment plans for robotic radiosurgery with the CyberKnife system (Accuray Inc., Sunnyvale) consist of a large number of non-isocentrical, cylindrical beams directed towards arbitrary points within the target volume. We implemented a tool to visualize the resulting 3D dose distribution and the beam directions using the visualization toolkit (VTK). A hypsometric color scheme allows the identification of cold and hot spots in the target volume, i.e. regions where the dose is close to the lower or upper bound specified for the target. Given this information we manually added 20 beams to an existing treatment plan with 1200 beams for an intracranial tumor. The beams were placed such that a large number of cold voxels were hit but hot voxels were avoided. To assess the spatial extent of the cold and hot regions and the orientation of the beams, an autostereoscopic display (SeeReal Technologies GmbH, Dresden) was used. An inverse planning algorithm similar to the one used by the CyberKnife system was implemented to re-optimize the plan; the result was compared to the original plan. Results: The original plan consisted of 119 weighted beams with an accumulated weight of 21763.3 MU. Adding 20 beams we obtained a plan with 123 beams with the total weight reduced to 21610.7 MU. All 20 new beams got the maximum weight of 250 MU per beam, i.e. other, less efficient beams were discarded by the optimizer. Conclusion: The visualization tool proved to be useful in the guidance of beam placement. A direction of additional beams towards cold spots in the target volume can improve the plan quality.
                                          BibTeX:
                                          @article{Schlaefer2005,
                                            author = {A. Schlaefer and O. Blanck and A. Schweikard},
                                            title = {WE-C-I-609-09: Autostereoscopic Display of the 3D Dose Distribution to Assess Beam Placement for Robotic Radiosurgery},
                                            journal = {Medical Physics},
                                            year = {2005},
                                            volume = {32},
                                            number = {6},
                                            pages = {2122},
                                            doi = {10.1118/1.1998500}
                                          }
                                          
                                        • A. Schlaefer, O. Blanck, H. Shiomi, A. Schweikard (2005), "Radiochirurgie: Identifizierung effizienter Behandlungsstrahlen mittels autostereoskopischer Visualisierung", In Jahrestagung der Deutschen Gesellschaft für Computer- und Roboterassistierte Chirurgie CURAC'2005. [BibTeX]
                                        • BibTeX:
                                          @inproceedings{Schlaefer2005c,
                                            author = {A. Schlaefer and O. Blanck and H. Shiomi and A. Schweikard},
                                            title = {Radiochirurgie: Identifizierung effizienter Behandlungsstrahlen mittels autostereoskopischer Visualisierung},
                                            booktitle = {Jahrestagung der Deutschen Gesellschaft für Computer- und Roboterassistierte Chirurgie CURAC'2005},
                                            year = {2005}
                                          }
                                          
                                        • A. Schlaefer, S. Dieterich, A. Schweikard (2005), "Berücksichtigung interfraktionaler Bewegungen bei der Behandlungsplanung für die bewegungskompensierte Strahlenchirurgie", In Jahrestagung der Deutschen Gesellschaft für Computer- und Roboterassistierte Chirurgie CURAC'2005. [BibTeX]
                                        • BibTeX:
                                          @conference{Schlaefer2005b,
                                            author = {A. Schlaefer and S. Dieterich and A. Schweikard},
                                            title = {Berücksichtigung interfraktionaler Bewegungen bei der Behandlungsplanung für die bewegungskompensierte Strahlenchirurgie},
                                            booktitle = {Jahrestagung der Deutschen Gesellschaft für Computer- und Roboterassistierte Chirurgie CURAC'2005},
                                            year = {2005}
                                          }
                                          
                                        • A. Schlaefer, J. Fisseler, S. Dieterich, H. Shiomi, K. Cleary, A. Schweikard (2005), "Feasibility of four-dimensional conformal planning for robotic radiosurgery", Med Phys. Vol. 32(12),3786-3792. [Abstract] [BibTeX] [DOI]
                                         • Abstract: Organ motion can have a severe impact on the dose delivered by radiation therapy, and different procedures have been developed to address its effects. Conventional techniques include breath hold methods and gating. A different approach is the compensation for target motion by moving the treatment beams synchronously. Practical results have been reported for robot-based radiosurgery, where a linear accelerator mounted on a robotic arm delivers the dose. However, not all organs move in the same way, which results in a relative motion of the beams with respect to the body and the tissues in the proximity of the tumor. This relative motion can severely affect the dose delivered to critical structures. We propose a method to incorporate motion in the treatment planning for robotic radiosurgery to avoid potential overdosing of organs surrounding the target. The method takes into account the motion of all considered volumes, which is discretized for dose calculations. Similarly, the beam motion is taken into account and the aggregated dose coefficient over all discrete steps is used for planning. We simulated the treatment of a moving target with three different planning methods. First, we computed beam weights based on a 3D planning situation and simulated treatment with organ motion and the beams moving synchronously to the target. Second, beam weights were computed by the 4D planning method incorporating the organ and beam motion and treatment was simulated for beams moving synchronously to the target. Third, the beam weights were determined by the 4D planning method with the beams fixed during planning and simulation. For comparison we also give results for the 3D treatment plan if there was no organ motion and when the plan is delivered by fixed beams in the presence of organ motion. The results indicate that the new 4D method is preferable and can further improve the overall conformality of motion compensated robotic radiosurgery.
                                          BibTeX:
                                          @article{Schlaefer2005a,
                                            author = {A. Schlaefer and J. Fisseler and S. Dieterich and H. Shiomi and K. Cleary and A. Schweikard},
                                            title = {Feasibility of four-dimensional conformal planning for robotic radiosurgery},
                                            journal = {Med Phys},
                                            year = {2005},
                                            volume = {32},
                                            number = {12},
                                            pages = {3786-3792},
                                            doi = {10.1118/1.2122607}
                                          }
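
The abstract above discretizes organ and beam motion and uses dose coefficients aggregated over all discrete steps for planning. The sketch below imitates that with per-phase toy dose matrices that are averaged before a static-style optimization; the equal dwell times, the bounds, and the comparison against a single phase are simplifications chosen for illustration, not the paper's 4D planning system.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(8)
n_points, n_beams, n_phases = 40, 25, 8
d_min, d_max = 3.0, 6.0

# One dose matrix per discrete motion phase: D_phase[p][i, j] is the dose point i receives
# from unit weight of beam j while the anatomy is in phase p (toy values only).
base = rng.uniform(0.2, 1.0, size=(n_points, n_beams))
D_phase = [np.clip(base + 0.1 * rng.standard_normal((n_points, n_beams)), 0.0, None)
           for _ in range(n_phases)]

# Aggregate the dose coefficients over all phases (equal dwell time assumed here),
# then plan on the aggregated matrix exactly as in the static case.
D_agg = np.mean(D_phase, axis=0)
A = np.vstack([D_agg, -D_agg])
b = np.concatenate([np.full(n_points, d_max), np.full(n_points, -d_min)])
res = linprog(np.ones(n_beams), A_ub=A, b_ub=b,
              bounds=[(0, None)] * n_beams, method="highs")
assert res.success

# Evaluating the same weights with a single-phase matrix shows how much the delivered
# dose would deviate if the motion were ignored.
dose_agg = D_agg @ res.x
dose_static = D_phase[0] @ res.x
print("aggregated dose range:", round(dose_agg.min(), 2), "-", round(dose_agg.max(), 2))
print("single-phase dose range:", round(dose_static.min(), 2), "-", round(dose_static.max(), 2))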
                                          

                                          2004

                                          • A. Schlaefer, P. Kneschaurek, A. Schweikard (2004), "Beam placement for robotic radiosurgery", Computer Assisted Radiology and Surgery. Proceedings of the 18th International Congress and Exhibition CARS'2004., In Computer Assisted Radiology and Surgery (CARS), Proceedings of the 18th International Congress and Exhibition. Chicago, USA, June, 2004. Vol. 1268,1235. Elsevier. [BibTeX] [DOI]
                                          • BibTeX:
                                            @inproceedings{Schlaefer2004,
                                              author = {A. Schlaefer and P. Kneschaurek and A. Schweikard},
                                              title = {Beam placement for robotic radiosurgery},
                                              booktitle = {Computer Assisted Radiology and Surgery (CARS), Proceedings of the 18th International Congress and Exhibition},
                                              journal = {Computer Assisted Radiology and Surgery. Proceedings of the 18th International Congress and Exhibition CARS'2004},
                                              publisher = {Elsevier},
                                              year = {2004},
                                              volume = {1268},
                                              pages = {1235},
                                              doi = {10.1016/j.ics.2004.03.025}
                                            }
                                            

                                            2002

                                            • L. Fritsche, A. Schlaefer, K. Budde, K. Schroeter, H. H. Neumayer (2002), "Recognition of critical situations from time series of laboratory results by case-based reasoning", J Am Med Inform Assoc. Vol. 9(5),520-528. [Abstract] [BibTeX]
                                             • Abstract: OBJECTIVE: To develop a technique for recognizing critical situations based on laboratory results in settings in which a normal range cannot be defined, because what is 'normal' differs widely from patient to patient. To assess the potential of this approach for kidney transplant recipients, where recognition of acute rejections is based on the pattern of changes in serum creatinine. DESIGN: We developed a case-based reasoning algorithm using dynamic time-warping as the measure of similarity which allows comparison of series of infrequent measurements at irregular intervals for retrieval of the most similar historical cases for the assessment of a new situation. MEASUREMENTS: The ability to recognize creatinine courses associated with an acute rejection was tested for a set of cases from a database of transplant patient records and compared with the diagnostic performance of experienced physicians. Tests were performed with case bases of various sizes. RESULTS: The accuracy of the algorithm increased steadily with the size of the available case base. With the largest case bases, the case-based algorithm reached an accuracy of 78 +/- 2%, which is significantly higher than the performance of experienced physicians (69 +/- 5.3%) (p < 0.001). CONCLUSION: The new case-based reasoning algorithm with dynamic time warping as the measure of similarity allows extension of the use of automatic laboratory alerting systems to conditions in which abnormal laboratory results are the norm and critical states can be detected only by recognition of pathological changes over time.
                                              BibTeX:
                                              @article{Fritsche2002,
                                                author = {L. Fritsche and A. Schlaefer and K. Budde and K. Schroeter and H. H. Neumayer},
                                                title = {Recognition of critical situations from time series of laboratory results by case-based reasoning},
                                                journal = {J Am Med Inform Assoc},
                                                year = {2002},
                                                volume = {9},
                                                number = {5},
                                                pages = {520-528}
                                              }
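
The abstract above retrieves the most similar historical creatinine courses with dynamic time warping and classifies a new course from its nearest neighbours. The sketch below implements the textbook DTW recurrence and a small k-nearest-neighbour vote on synthetic courses; unlike the paper's variant, it ignores the actual measurement dates and uses made-up case data.

import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D sequences of possibly
    different lengths (no window constraint, illustrative only)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(query, case_base, k=3):
    """k-nearest-neighbour vote over a case base of (series, label) pairs."""
    dists = sorted((dtw_distance(query, s), label) for s, label in case_base)
    top = [label for _, label in dists[:k]]
    return max(set(top), key=top.count)

# Toy case base: stable creatinine courses ("ok") vs. rising courses ("rejection").
rng = np.random.default_rng(9)
case_base = []
for _ in range(20):
    n = rng.integers(5, 12)                       # irregular number of measurements
    case_base.append((1.4 + 0.1 * rng.standard_normal(n), "ok"))
    case_base.append((1.4 + np.linspace(0, 1.2, n) + 0.1 * rng.standard_normal(n), "rejection"))

query = np.array([1.5, 1.6, 1.9, 2.3, 2.6])
print("classified as:", classify(query, case_base))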
                                              

                                              2001

                                              • L. Fritsche, A. Schlaefer, K. Budde, K. Schroeter, H. H. Neumayer (2001), "Case-based reasoning algorithm for kidney transplant monitoring", Transplant Proc., November-December, 2001. Vol. 33(7-8),3331-3333. [BibTeX] [DOI]
                                              • BibTeX:
                                                @article{Fritsche2001,
                                                  author = {L. Fritsche and A. Schlaefer and K. Budde and K. Schroeter and H. H. Neumayer},
                                                  title = {Case-based reasoning algorithm for kidney transplant monitoring},
                                                  journal = {Transplant Proc},
                                                  year = {2001},
                                                  volume = {33},
                                                  number = {7-8},
                                                  pages = {3331-3333},
                                                  doi = {10.1016/S0041-1345(01)02434-4}
                                                }
                                                
                                              • A. Schlaefer, K. Schroeter, L. Fritsche (2001), "A Case-Based Approach for the Classification of Medical Time Series", J. Crespo, V. Maojo, F. Martin (Eds.): Medical Data Analysis. Vol. 2199,258-263. [Abstract] [BibTeX] [URL]
                                               • Abstract: An early and reliable detection of rejections is most important for the successful treatment of renal transplantation patients. A good indicator for the renal function of transplanted patients is the course over time of the parameter creatinine. Existing systems for the analysis of time series usually require frequent and equidistant measurements or a well defined medical theory. These requirements are not fulfilled in our application domain. In this paper we present a case-based approach to classify a creatinine course as critical or non-critical. The distance measure used to find similar cases is based on linear regression. Our results show that, while maintaining good specificity, our approach achieves a sensitivity significantly higher than that of physicians.
                                                BibTeX:
                                                @article{Schlaefer2001,
                                                  author = {A. Schlaefer and K. Schroeter and L. Fritsche},
                                                  title = {A Case-Based Approach for the Classification of Medical Time Series},
                                                  journal = {J. Crespo, V. Maojo, F. Martin (Eds.): Medical Data Analysis},
                                                  year = {2001},
                                                  volume = {2199},
                                                  pages = {258-263},
                                                  note = {Springer LNCS 2199},
                                                  url = {http://link.springer.com/chapter/10.1007%2F3-540-45497-7_39}
                                                }
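
The abstract above classifies a creatinine course as critical or non-critical by retrieving similar cases with a distance measure based on linear regression. The sketch below is one plausible reading of that idea: each course is summarised by the slope and level of a least-squares line fit, and courses are compared in that feature space; the feature choice, scaling, and toy courses are assumptions of this example, not the paper's exact distance.

import numpy as np

def regression_features(days, values):
    """Summarise an irregularly sampled creatinine course by the slope and level of a
    least-squares line fit (a simple stand-in for a regression-based distance)."""
    slope, intercept = np.polyfit(days, values, 1)
    return np.array([slope * 30.0, np.mean(values)])   # slope per 30 days, mean level

def distance(course_a, course_b):
    return float(np.linalg.norm(regression_features(*course_a) - regression_features(*course_b)))

# Toy courses: (measurement days, creatinine values in mg/dL); purely illustrative.
stable = (np.array([60, 75, 95, 120, 160]), np.array([1.4, 1.5, 1.4, 1.5, 1.4]))
rising = (np.array([60, 80, 110, 150, 180]), np.array([1.5, 1.7, 2.0, 2.4, 2.9]))
query = (np.array([65, 90, 130, 170]), np.array([1.6, 1.9, 2.3, 2.7]))

label = "critical" if distance(query, rising) < distance(query, stable) else "non-critical"
print("query classified as:", label)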
                                                

                                                2000

                                                • G. Lindemann, L. Fritsche, K. Schröter, A. Schlaefer, K. Budde, H. H. Neumayer (2000), "A Web-Based Patient Record for Hospitals - The Design of TBase2", Bruch, Köckerling, Bouchard, Schug-Paß (Eds.): New Aspects of High Technology in Medicine. [BibTeX]
                                                • BibTeX:
                                                  @article{Lindemann2000,
                                                    author = {G. Lindemann and L. Fritsche and K. Schröter and A. Schlaefer and K. Budde and H. H. Neumayer},
                                                    title = {A Web-Based Patient Record for Hospitals - The Design of TBase2},
                                                    journal = {Bruch, Köckerling, Bouchard, Schug-Paß (Eds.): New Aspects of High Technology in Medicine},
                                                    year = {2000}
                                                  }
                                                  
                                                • F. Müller, J. Nolte, A. Schlaefer (2000), "CLIX - A Hybrid Programming Environment for Distributed Objects and Distributed Shared Memory", J. Rolim et al. (Eds.): Parallel and Distributed Processing - Workshop on High-Level Parallel Programming Models and Supportive Environments. ,285-292. [Abstract] [BibTeX]
                                                • Abstract: Parallel programming with distributed object technology becomes increasingly popular but shared-memory programming is still a common way of utilizing parallel machines. In fact, both models can coexist fairly well and software DSM systems can be constructed easily using distributed object systems. In this paper, we describe the construction of a hybrid programming platform based on the Arts distributed object system. We describe how an object-oriented design approach provides a compact and flexible description of the system components. A sample implementation demonstrates that three classes of less than 100 lines of code each suffice to implement sequential consistency
                                                  BibTeX:
                                                  @article{Mueller2000,
                                                    author = {F. Müller and J. Nolte and A. Schlaefer},
                                                    title = {CLIX - A Hybrid Programming Environment for Distributed Objects and Distributed Shared Memory},
                                                    journal = {J. Rolim et al. (Eds.): Parallel and Distributed Processing - Workshop on High-Level Parallel Programming Models and Supportive Environments},
                                                    year = {2000},
                                                    pages = {285-292},
                                                    note = {Springer LNCS 1800}
                                                  }
                                                  
                                                  Created by JabRef on 08/11/2024.