Nebiker, Stephan


Search results

Now showing 1 - 10 of 10
  • Publication
    AI-based 3D detection of parked vehicles on a mobile mapping platform using edge computing
    (2022) Meyer, Jonas; Blaser, Stefan; Nebiker, Stephan [in: The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences]
    In this paper, we present an edge-based hardware and software framework for the 3D detection and mapping of parked vehicles on a mobile mapping platform for the use case of on-street parking statistics. First, we investigate different point cloud-based 3D object detection methods on our extremely dense and noisy depth maps obtained from low-cost RGB-D sensors in order to find a suitable object detector and to determine the optimal preparation of our data. We then retrain the chosen object detector to detect all types of vehicles rather than standard cars only. Finally, we design and develop a software framework integrating the newly trained object detector. We test the detection accuracy of our software by repeating the parking statistics survey of our previous work (Nebiker et al., 2021). With our edge-based framework, we achieve a precision of 100% and a recall of 98% across all parking configurations and vehicle types, outperforming all other known work on on-street parking statistics. Furthermore, our software is evaluated in terms of processing speed and volume of generated data. While the processing speed reaches only 1.9 frames per second due to the limited computing resources, the amount of data generated is just 0.25 KB per frame.
    04B - Contribution to conference proceedings
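
    Illustrative sketch: the abstract above describes preparing dense, noisy depth maps from low-cost RGB-D sensors for point-cloud-based 3D object detection. The following minimal Python/NumPy snippet shows one common form of such a preparation step, back-projecting a depth map into a 3D point cloud with pinhole intrinsics; the function name, intrinsics and range threshold are assumptions, and this is a sketch, not the authors' implementation.

    ```python
    import numpy as np

    def depth_to_point_cloud(depth, fx, fy, cx, cy, max_range=10.0):
        """Back-project a depth map (metres) into an N x 3 point cloud.

        depth: (H, W) array of depth values from an RGB-D sensor.
        fx, fy, cx, cy: assumed pinhole intrinsics of the depth camera.
        max_range: drop far-range measurements, which are noisy on low-cost sensors.
        """
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        valid = (depth > 0) & (depth < max_range)
        z = depth[valid]
        x = (u[valid] - cx) * z / fx
        y = (v[valid] - cy) * z / fy
        return np.stack([x, y, z], axis=-1)  # points in the camera frame

    # Toy example: a constant 5 m depth map at VGA resolution
    cloud = depth_to_point_cloud(np.full((480, 640), 5.0), fx=600.0, fy=600.0, cx=320.0, cy=240.0)
    print(cloud.shape)  # (307200, 3)
    ```
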
  • Publication
    Open urban and forest datasets from a high-performance mobile mapping backpack. A contribution for advancing the creation of digital city twins
    (International Society for Photogrammetry and Remote Sensing, 2021) Blaser, Stefan; Meyer, Jonas; Nebiker, Stephan [in: The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences]
    With this contribution, we describe and publish two high-quality street-level datasets captured with a portable high-performance Mobile Mapping System (MMS). The datasets will be freely available for scientific use. Both datasets, one from a city centre and one from a forest, represent area-wide street-level reality captures which can be used, for example, for establishing cloud-based frameworks for infrastructure management as well as for smart city and forestry applications. The quality of these datasets has been thoroughly evaluated and demonstrated. For example, georeferencing accuracies in the centimetre range have been achieved using these datasets in combination with image-based georeferencing. Both high-quality multi-sensor street-level datasets are suitable for evaluating and improving methods for multiple tasks related to high-precision 3D reality capture and the creation of digital twins. Potential applications range from localization and georeferencing, dense image matching and 3D reconstruction to combined methods such as simultaneous localization and mapping and structure-from-motion, as well as classification and scene interpretation. Our datasets are available online at: https://www.fhnw.ch/habg/bimage-datasets
    04B - Contribution to conference proceedings
  • Publication
    Outdoor mobile mapping and AI-based 3D object detection with low-cost RGB-D cameras. The use case of on-street parking statistics
    (MDPI, 2021) Nebiker, Stephan; Meyer, Jonas; Blaser, Stefan; Ammann, Manuela; Rhyner, Severin Eric [in: Remote sensing]
    A successful application of low-cost 3D cameras in combination with artificial intelligence (AI)-based 3D object detection algorithms to outdoor mobile mapping would offer great potential for numerous mapping, asset inventory, and change detection tasks in the context of smart cities. This paper presents a mobile mapping system mounted on an electric tricycle and a procedure for creating on-street parking statistics, which allow government agencies and policy makers to verify and adjust parking policies in different city districts. Our method combines georeferenced red-green-blue-depth (RGB-D) imagery from two low-cost 3D cameras with state-of-the-art 3D object detection algorithms for extracting and mapping parked vehicles. Our investigations demonstrate the suitability of the latest generation of low-cost 3D cameras for real-world outdoor applications with respect to supported ranges, depth measurement accuracy, and robustness under varying lighting conditions. In an evaluation of suitable algorithms for detecting vehicles in the noisy and often incomplete 3D point clouds from RGB-D cameras, the 3D object detection network PointRCNN, which extends region-based convolutional neural networks (R-CNNs) to 3D point clouds, clearly outperformed all other candidates. The results of a mapping mission with 313 parking spaces show that our method is capable of reliably detecting parked cars with a precision of 100% and a recall of 97%. It can be applied to unslotted and slotted parking and different parking types including parallel, perpendicular, and angle parking.
    01A - Article in a scientific journal
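
    Illustrative sketch: the precision (100%) and recall (97%) reported above follow the standard detection metric definitions. The following minimal Python snippet states those definitions with hypothetical true/false positive counts chosen only to be consistent with the 313 parking spaces mentioned in the abstract; it is not the paper's evaluation code.

    ```python
    def precision_recall(tp, fp, fn):
        """Detection metrics: precision = TP / (TP + FP), recall = TP / (TP + FN)."""
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        return precision, recall

    # Hypothetical counts (not from the paper): 304 of 313 parked vehicles found,
    # with no false detections.
    p, r = precision_recall(tp=304, fp=0, fn=9)
    print(f"precision = {p:.2f}, recall = {r:.2f}")  # precision = 1.00, recall = 0.97
    ```
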
  • Publication
    Image-based orientation determination of mobile sensor platforms
    (International Society for Photogrammetry and Remote Sensing, 2021) Hasler, Oliver; Nebiker, Stephan [in: The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences]
    Estimating the pose of a mobile robotic platform is a challenging task, especially when the pose needs to be estimated in a global or local reference frame and when the estimation has to be performed while the platform is moving. While the position of a platform can be measured directly via modern tachymetry or with the help of a global navigation satellite system (GNSS), the absolute platform orientation is harder to derive. Most often, only the relative orientation is estimated with the help of sensors mounted on the robotic platform, such as an IMU, one or multiple cameras, a laser scanner, or a combination of these. Then, a sensor fusion of the relative orientation and the absolute position is performed. In this work, an additional approach is presented: first, an image-based relative pose estimation with frames from a panoramic camera is performed using a state-of-the-art visual odometry implementation. Secondly, the position of the platform in a reference system is estimated using motorized tachymetry. Lastly, the absolute orientation is calculated using a visual marker placed in the space where the robotic platform is moving. The marker can be detected in the camera frame, and since the position of this marker is known in the reference system, the absolute pose can be estimated. To improve the absolute pose estimation, a sensor fusion is conducted. Results with a Lego model train as a mobile platform show that the absolute pose trajectories calculated independently with four different markers deviate by less than 0.66 degrees 50% of the time and that the average difference is less than 1.17 degrees. The implementation is based on the popular Robot Operating System (ROS).
    04B - Contribution to conference proceedings
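
    Illustrative sketch: the abstract above derives the absolute orientation from a visual marker whose position is known in the reference system, combined with the platform position from tachymetry. The following minimal Python snippet shows a simplified 2D analogue of that idea (heading only); the function name and coordinate conventions are assumptions, and it is not the method implemented in the paper.

    ```python
    import numpy as np

    def absolute_heading(platform_xy, marker_xy, marker_bearing_body):
        """2D analogue of deriving absolute orientation from a known marker.

        platform_xy: platform position in the reference frame (e.g. from tachymetry).
        marker_xy:   known marker position in the same reference frame.
        marker_bearing_body: bearing of the detected marker in the platform frame (rad).
        Returns the platform heading in the reference frame (rad).
        """
        dx, dy = np.asarray(marker_xy, float) - np.asarray(platform_xy, float)
        bearing_world = np.arctan2(dy, dx)       # direction to the marker in the world
        heading = bearing_world - marker_bearing_body
        return (heading + np.pi) % (2 * np.pi) - np.pi  # wrap to [-pi, pi)

    # Toy example: marker due "north" of the platform, seen 90 degrees left of forward
    print(np.degrees(absolute_heading((0, 0), (0, 10), np.radians(90))))  # 0.0
    ```
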
  • Publication
    Long-term visual localization in large scale urban environments exploiting street level imagery
    (Copernicus, 2020) Meyer, Jonas; Rettenmund, Daniel; Nebiker, Stephan [in: ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences]
    In this paper, we present our approach for robust long-term visual localization in large-scale urban environments exploiting street-level imagery. Our approach consists of a 2D image-based localization using image retrieval (NetVLAD) to select reference images, followed by a 3D structure-based localization with a robust image matcher (DenseSfM) for accurate pose estimation. This visual localization approach is evaluated by means of the ‘Sun’ subset of the RobotCar Seasons dataset, which is part of the Visual Localization benchmark. As the results on the RobotCar benchmark dataset are nearly on par with the top-ranked approaches, we focused our investigations on reproducibility and performance with our own data. For this purpose, we created a dataset with street-level imagery. In order to have independent reference and query images, we used a road-based and a tram-based mapping campaign with a time difference of four years. The fact that approximately 90% of the images of both datasets were successfully oriented is a good indicator of the robustness of our approach. With a success rate of about 50%, every second image could be localized with a position accuracy better than 0.25 m and a rotation accuracy better than 2°.
    01A - Article in a scientific journal
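
    Illustrative sketch: the first stage of the approach described above selects reference images by image retrieval on global descriptors (NetVLAD). The following minimal Python/NumPy snippet shows the generic nearest-neighbour ranking step on such descriptors; the descriptors here are random placeholders, and the snippet is a sketch, not the authors' pipeline.

    ```python
    import numpy as np

    def retrieve_top_k(query_desc, ref_descs, k=5):
        """Rank reference images by cosine similarity of global image descriptors.

        query_desc: (D,) descriptor of the query image (e.g. a NetVLAD vector).
        ref_descs:  (N, D) descriptors of the georeferenced reference images.
        Returns the indices of the k most similar reference images.
        """
        q = query_desc / np.linalg.norm(query_desc)
        r = ref_descs / np.linalg.norm(ref_descs, axis=1, keepdims=True)
        return np.argsort(-(r @ q))[:k]

    # Toy example with random placeholder descriptors for 100 reference images
    rng = np.random.default_rng(0)
    refs = rng.normal(size=(100, 4096))
    query = refs[42] + 0.01 * rng.normal(size=4096)  # query resembles reference 42
    print(retrieve_top_k(query, refs))               # index 42 should rank first
    ```
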
  • Publication
    Performance evaluation of a mobile mapping application using smartphones and augmented reality frameworks
    (2020) Hasler, Oliver; Blaser, Stefan; Nebiker, Stephan [in: ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences]
    In this paper, we present a performance evaluation of our smartphone-based mobile mapping application, built on an augmented reality (AR) framework, in demanding outdoor environments. The implementation runs on Android and iOS devices and demonstrates the great potential of smartphone-based 3D mobile mapping. The application includes several functionalities such as device tracking, coordinate and distance measurement, and capturing georeferenced imagery. We evaluated our prototype system by comparing measured points from the tracked device with ground control points in an outdoor environment in four different campaigns. The campaigns consisted of open and closed-loop trajectories and different ground surfaces such as grass, concrete and gravel. Two campaigns passed a stairway in either direction. Our results show that the absolute 3D accuracy of device tracking with a state-of-the-art AR framework on a standard smartphone is around 1% of the travelled distance and that the local 3D accuracy reaches the sub-decimetre level.
    01A - Article in a scientific journal
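
    Illustrative sketch: the accuracy figure above is expressed as a percentage of the travelled distance. The following minimal Python/NumPy snippet shows one straightforward way to compute such a figure from corresponding estimated and ground-truth positions; the variable names and the endpoint-error convention are assumptions, not the paper's evaluation procedure.

    ```python
    import numpy as np

    def drift_percentage(estimated, ground_truth):
        """Endpoint error of a tracked trajectory as a percentage of travelled distance.

        estimated, ground_truth: (N, 3) arrays of corresponding 3D positions,
        assumed to be expressed in the same coordinate frame.
        """
        travelled = np.linalg.norm(np.diff(ground_truth, axis=0), axis=1).sum()
        endpoint_error = np.linalg.norm(estimated[-1] - ground_truth[-1])
        return 100.0 * endpoint_error / travelled

    # Toy example: 100 m straight trajectory with 1 m endpoint error -> 1 %
    gt = np.column_stack([np.linspace(0, 100, 101), np.zeros(101), np.zeros(101)])
    est = gt.copy()
    est[-1, 1] += 1.0
    print(drift_percentage(est, gt))  # 1.0
    ```
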
  • Publication
    Centimetre-accuracy in forests and urban canyons. Combining a high-performance image-based mobile mapping backpack with new georeferencing methods
    (Copernicus, 2020) Blaser, S.; Meyer, Jonas; Nebiker, Stephan; Fricker, L.; Weber, D. [in: ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences]
    Advances in digitalization technologies lead to rapid and massive changes in infrastructure management. New collaborative processes and workflows require detailed, accurate and up-to-date 3D geodata. Image-based web services with 3D measurement functionality, for example, transfer dangerous and costly inspection and measurement tasks from the field to the office workplace. In this contribution, we introduce an image-based backpack mobile mapping system and new georeferencing methods for capturing previously inaccessible outdoor locations. We carried out large-scale performance investigations at two different test sites located in a city centre and in a forest area. We compared the performance of direct, SLAM-based and image-based georeferencing under demanding real-world conditions. Both test sites include areas with restricted GNSS reception, poor illumination, and uniform or ambiguous geometry, which create major challenges for reliable and accurate georeferencing. In our comparison of georeferencing methods, image-based georeferencing improved the median precision of coordinate measurement over direct georeferencing by a factor of 10–15, to 3 mm. Image-based georeferencing also showed superior performance in terms of absolute accuracy, with results in the range of 4.3 cm to 13.2 cm. Our investigations showed a great potential for complementing 3D image-based geospatial web services of cities as well as for creating such web services for forest applications. In addition, such accurately georeferenced 3D imagery has an enormous potential for future visual localization and augmented reality applications.
    01A - Article in a scientific journal
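
    Illustrative sketch: the comparison above evaluates how well locally estimated coordinates agree with reference measurements. A common building block in such evaluations is a least-squares rigid alignment between corresponding point sets; the following minimal Python/NumPy snippet shows a standard Kabsch/Umeyama-style estimate. It is given only as background and is not one of the georeferencing methods investigated in the paper.

    ```python
    import numpy as np

    def rigid_align(source, target):
        """Least-squares rigid transform (R, t) mapping source points onto target points."""
        mu_s, mu_t = source.mean(axis=0), target.mean(axis=0)
        H = (source - mu_s).T @ (target - mu_t)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = mu_t - R @ mu_s
        return R, t

    # Toy check: rotate and shift a point set, then recover the applied transform
    rng = np.random.default_rng(1)
    pts = rng.normal(size=(20, 3))
    angle = np.deg2rad(30)
    R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                       [np.sin(angle),  np.cos(angle), 0.0],
                       [0.0, 0.0, 1.0]])
    t_true = np.array([10.0, 5.0, 2.0])
    R_est, t_est = rigid_align(pts, pts @ R_true.T + t_true)
    print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))  # True True
    ```
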
  • Publication
    Development of a portable high performance mobile mapping system using the Robot Operating System
    (Copernicus, 2018) Blaser, Stefan; Cavegn, Stefan; Nebiker, Stephan [in: ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences]
    The rapid progress of digitalization in the construction industry and in facility management creates an enormous demand for the efficient and accurate reality capture of indoor spaces. Cloud-based services based on georeferenced metric 3D imagery are already extensively used for infrastructure management in outdoor environments. The goal of our research is to enable such services for indoor applications as well. For this purpose, we designed a portable mobile mapping research platform with a strong focus on acquiring accurate 3D imagery. Our system consists of a multi-head panorama camera in combination with two multi-profile LiDAR scanners and a MEMS-based industrial-grade IMU for LiDAR-based online and offline SLAM. Our modular implementation based on the Robot Operating System enables rapid adaptations of the sensor configuration and the acquisition software. The developed workflow provides for completely GNSS-independent data acquisition and camera pose estimation using LiDAR-based SLAM. Furthermore, we apply a novel image-based georeferencing approach to further improve the camera poses. First performance evaluations show an improvement from LiDAR-based SLAM to image-based georeferencing by an order of magnitude: from 10–13 cm to 1.3–1.8 cm in absolute 3D point accuracy and from 8–12 cm to the sub-centimetre level in relative 3D point accuracy.
    01A - Article in a scientific journal
  • Publication
    Accurate visual localization in outdoor and indoor environments exploiting 3D image spaces as spatial reference
    (International Society for Photogrammetry and Remote Sensing, 2018) Rettenmund, Daniel; Fehr, Markus; Cavegn, Stefan; Nebiker, Stephan [in: The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences]
    In this paper, we present a method for visual localization and pose estimation based on 3D image spaces. The method works in indoor and outdoor environments and does not require the presence of control points or markers. It is evaluated with different sensors in an outdoor and an indoor test field. The results of our research show the viability of single-image localization with absolute position accuracies at the decimetre level for outdoor environments and of 5 cm or better for indoor environments. However, the evaluation also revealed a number of limitations of single-image visual localization in real-world environments. Some of them could be addressed by an alternative AR-based localization approach, which we also present and compare in this paper. We then discuss the strengths and weaknesses of the two approaches and show possibilities for combining them to obtain accurate and robust visual localization in an absolute coordinate frame.
    04B - Contribution to conference proceedings
  • Publication
    Robust and accurate image-based georeferencing exploiting relative orientation constraints
    (Copernicus, 2018) Cavegn, Stefan; Blaser, S.; Nebiker, Stephan; Haala, N. [in: ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences]
    Urban environments with extended areas of poor GNSS coverage, as well as indoor spaces that often rely on real-time SLAM algorithms for camera pose estimation, require sophisticated georeferencing in order to fulfil our high requirement of a few centimetres for absolute 3D point measurement accuracy. Since we focus on image-based mobile mapping, we extended the structure-from-motion pipeline COLMAP with georeferencing capabilities by integrating exterior orientation parameters from direct sensor orientation or SLAM as well as ground control points into bundle adjustment. Furthermore, we exploit constraints for relative orientation parameters among all cameras in bundle adjustment, which leads to a significant increase in robustness and accuracy, especially when incorporating highly redundant multi-view image sequences. We evaluated our integrated georeferencing approach on two datasets, one captured outdoors by a vehicle-based multi-stereo mobile mapping system and the other captured indoors by a portable panoramic mobile mapping system. We obtained mean RMSE values for check point residuals between image-based georeferencing and tachymetry of 2 cm in an indoor area and 3 cm in an urban environment, where the measurement distances are several times larger than indoors. Moreover, in comparison to a solely image-based procedure, our integrated georeferencing approach showed a consistent accuracy increase by a factor of 2–3 at our outdoor test site. Due to pre-calibrated relative orientation parameters, the images of all camera heads were oriented correctly in our challenging indoor environment. By performing self-calibration of the relative orientation parameters among the respective cameras of our vehicle-based mobile mapping system, remaining inaccuracies from a suboptimal test field calibration were successfully compensated.
    01A - Article in a scientific journal
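
    Illustrative sketch: the approach above adds constraints on the relative orientation between rigidly mounted cameras to bundle adjustment. The following minimal Python/NumPy snippet shows what such a residual can look like for a single image pair, using 4x4 homogeneous pose matrices; the function names and the axis-angle/translation residual parameterization are assumptions, and this is not the COLMAP extension described in the paper.

    ```python
    import numpy as np

    def relative_pose(T_wa, T_wb):
        """Relative transform from camera A to camera B, given their 4x4 world poses."""
        return np.linalg.inv(T_wa) @ T_wb

    def relative_orientation_residual(T_wa, T_wb, T_ab_calibrated):
        """Deviation of the current relative pose from a pre-calibrated one.

        Returns a 6-vector: rotation residual (axis-angle, rad) and translation
        residual (m). In a bundle adjustment, one such term per synchronously
        captured image pair penalizes deviations from the rig calibration.
        """
        dT = np.linalg.inv(T_ab_calibrated) @ relative_pose(T_wa, T_wb)
        R, t = dT[:3, :3], dT[:3, 3]
        angle = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
        if angle < 1e-9:
            rot = np.zeros(3)
        else:
            axis = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
            rot = angle * axis / (2.0 * np.sin(angle))
        return np.concatenate([rot, t])

    # Toy check: poses that exactly satisfy the calibrated offset give a zero residual
    T_ab = np.eye(4); T_ab[0, 3] = 0.5            # calibrated: camera B 0.5 m right of A
    T_wa = np.eye(4)                              # camera A at the world origin
    T_wb = T_wa @ T_ab                            # camera B placed consistently
    print(relative_orientation_residual(T_wa, T_wb, T_ab))  # ~[0 0 0 0 0 0]
    ```
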