Nebiker, Stephan
AI-based 3D detection of parked vehicles on a mobile mapping platform using edge computing
2022, Meyer, Jonas, Blaser, Stefan, Nebiker, Stephan
In this paper we present an edge-based hardware and software framework for the 3D detection and mapping of parked vehicles on a mobile mapping platform for the use case of on-street parking statistics. First, we investigate different point cloud-based 3D object detection methods on our extremely dense and noisy depth maps obtained from low-cost RGB-D sensors to find a suitable object detector and determine the optimal preparation of our data. We then retrain the chosen object detector to detect all types of vehicles, rather than standard cars only. Finally, we design and develop a software framework integrating the newly trained object detector. To assess detection accuracy, the software is tested by repeating the parking statistics survey of our previous work (Nebiker et al., 2021). With our edge-based framework, we achieve a precision of 100% and a recall of 98% across all parking configurations and vehicle types, outperforming all other known work on on-street parking statistics. Furthermore, our software is evaluated in terms of processing speed and volume of generated data. While the processing speed reaches only 1.9 frames per second due to limited computing resources, the amount of data generated is just 0.25 KB per frame.
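The precision and recall figures reported in this abstract follow the standard detection-evaluation definitions. A minimal sketch, using hypothetical counts (not the authors' evaluation code):

```python
# Sketch of how precision and recall are computed for a detection evaluation.
# The counts below are hypothetical, chosen to mirror the reported 100% / 98%.

def precision_recall(true_pos: int, false_pos: int, false_neg: int):
    """Return (precision, recall) from detection counts."""
    precision = true_pos / (true_pos + false_pos) if (true_pos + false_pos) else 0.0
    recall = true_pos / (true_pos + false_neg) if (true_pos + false_neg) else 0.0
    return precision, recall

# Hypothetical example: 98 of 100 parked vehicles detected, no false alarms.
p, r = precision_recall(true_pos=98, false_pos=0, false_neg=2)
```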
Outdoor mobile mapping and AI-based 3D object detection with low-cost RGB-D cameras. The use case of on-street parking statistics
2021, Nebiker, Stephan, Meyer, Jonas, Blaser, Stefan, Ammann, Manuela, Rhyner, Severin Eric
A successful application of low-cost 3D cameras in combination with artificial intelligence (AI)-based 3D object detection algorithms to outdoor mobile mapping would offer great potential for numerous mapping, asset inventory, and change detection tasks in the context of smart cities. This paper presents a mobile mapping system mounted on an electric tricycle and a procedure for creating on-street parking statistics, which allow government agencies and policy makers to verify and adjust parking policies in different city districts. Our method combines georeferenced red-green-blue-depth (RGB-D) imagery from two low-cost 3D cameras with state-of-the-art 3D object detection algorithms for extracting and mapping parked vehicles. Our investigations demonstrate the suitability of the latest generation of low-cost 3D cameras for real-world outdoor applications with respect to supported ranges, depth measurement accuracy, and robustness under varying lighting conditions. In an evaluation of suitable algorithms for detecting vehicles in the noisy and often incomplete 3D point clouds from RGB-D cameras, the 3D object detection network PointRCNN, which extends region-based convolutional neural networks (R-CNNs) to 3D point clouds, clearly outperformed all other candidates. The results of a mapping mission with 313 parking spaces show that our method is capable of reliably detecting parked cars with a precision of 100% and a recall of 97%. It can be applied to unslotted and slotted parking and different parking types including parallel, perpendicular, and angle parking.
Performance evaluation of a mobile mapping application using smartphones and augmented reality frameworks
2020, Hasler, Oliver, Blaser, Stefan, Nebiker, Stephan
In this paper, we present a performance evaluation of our smartphone-based mobile mapping application based on an augmented reality (AR) framework in demanding outdoor environments. The implementation runs on Android and iOS devices and demonstrates the great potential of smartphone-based 3D mobile mapping. The application includes several functionalities such as device tracking, coordinate and distance measuring, as well as capturing georeferenced imagery. We evaluated our prototype system by comparing measured points from the tracked device with ground control points in an outdoor environment in four different campaigns. The campaigns consisted of open and closed-loop trajectories and different ground surfaces such as grass, concrete and gravel. Two campaigns passed a stairway in either direction. Our results show that the absolute 3D accuracy of device tracking with a state-of-the-art AR framework on a standard smartphone is around 1% of the travelled distance and that the local 3D accuracy reaches sub-decimetre level.
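Expressing absolute tracking error as a percentage of travelled distance, as in the evaluation above, can be sketched as follows. The trajectory and error value are hypothetical, not taken from the paper:

```python
# Sketch: drift expressed as a percentage of travelled distance.
import math

def travelled_distance(points):
    """Sum of Euclidean segment lengths along a 3D trajectory."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def drift_percent(abs_error_m, points):
    """Absolute 3D error at the end of the trajectory, as % of path length."""
    return 100.0 * abs_error_m / travelled_distance(points)

# Hypothetical 100 m straight trajectory with a 1 m final error -> 1% drift.
traj = [(float(x), 0.0, 0.0) for x in range(0, 101, 10)]
drift = drift_percent(1.0, traj)
```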
Development of a portable high performance mobile mapping system using the robot operating system
2018, Blaser, Stefan, Cavegn, Stefan, Nebiker, Stephan
The rapid progression in digitalization in the construction industry and in facility management creates an enormous demand for the efficient and accurate reality capturing of indoor spaces. Cloud-based services based on georeferenced metric 3D imagery are already extensively used for infrastructure management in outdoor environments. The goal of our research is to enable such services for indoor applications as well. For this purpose, we designed a portable mobile mapping research platform with a strong focus on acquiring accurate 3D imagery. Our system consists of a multi-head panorama camera in combination with two multi-profile LiDAR scanners and a MEMS-based industrial grade IMU for LiDAR-based online and offline SLAM. Our modular implementation based on the Robot Operating System enables rapid adaptations of the sensor configuration and the acquisition software. The developed workflow provides for completely GNSS-independent data acquisition and camera pose estimation using LiDAR-based SLAM. Furthermore, we apply a novel image-based georeferencing approach for further improving camera poses. First performance evaluations show an improvement from LiDAR-based SLAM to image-based georeferencing by an order of magnitude: from 10–13 cm to 1.3–1.8 cm in absolute 3D point accuracy and from 8–12 cm to sub-centimeter in relative 3D point accuracy.
Image-based reality-capturing and 3D modelling for the creation of VR cycling simulations
2021, Wahbeh, Wissam, Ammann, Manuela, Nebiker, Stephan, van Eggermond, Michael, Erath, Alexander
With this paper, we present a novel approach for efficiently creating reality-based, high-fidelity urban 3D models for interactive VR cycling simulations. The foundation of these 3D models is accurately georeferenced street-level imagery, which can be captured using vehicle-based or portable mapping platforms. Depending on the desired type of urban model, the street-level imagery is either used for semi-automatically texturing an existing city model or for automatically creating textured 3D meshes from multi-view reconstructions using commercial off-the-shelf software. The resulting textured urban 3D model is then integrated with a real-time traffic simulation solution to create a VR framework based on the Unity game engine. Subsequently, the resulting urban scenes and different planning scenarios can be explored on a physical cycling simulator using a VR helmet or viewed as a 360-degree or conventional video. In addition, the VR environment can be used for augmented reality applications, e.g., mobile augmented reality maps. We apply this framework to a case study in the city of Berne to illustrate design variants of new cycling infrastructure at a major traffic junction to collect feedback from practitioners about the potential for practical applications in planning processes.
Image-based orientation determination of mobile sensor platforms
2021, Hasler, Oliver, Nebiker, Stephan
Estimating the pose of a mobile robotic platform is a challenging task, especially when the pose needs to be estimated in a global or local reference frame and when the estimation has to be performed while the platform is moving. While the position of a platform can be measured directly via modern tachymetry or with the help of a global navigation satellite system (GNSS), the absolute platform orientation is harder to derive. Most often, only the relative orientation is estimated with the help of a sensor mounted on the robotic platform such as an IMU, one or multiple cameras, a laser scanner, or a combination of these. Then, a sensor fusion of the relative orientation and the absolute position is performed. In this work, an additional approach is presented: first, an image-based relative pose estimation is performed with frames from a panoramic camera using a state-of-the-art visual odometry implementation. Secondly, the position of the platform in a reference system is estimated using motorized tachymetry. Lastly, the absolute orientation is calculated using a visual marker placed in the space in which the robotic platform is moving. The marker can be detected in the camera frame, and since the position of this marker is known in the reference system, the absolute pose can be estimated. To improve the absolute pose estimation, a sensor fusion is conducted. Results with a Lego model train as a mobile platform show that the trajectories of the absolute pose, calculated independently with four different markers, deviate by less than 0.66 degrees 50% of the time and that the average difference is less than 1.17 degrees. The implementation is based on the popular Robot Operating System (ROS).
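One building block of such an approach is combining per-marker estimates of the angular offset between the odometry frame and the reference frame. A minimal sketch with hypothetical offset values (the averaging strategy here is illustrative, not the paper's fusion scheme):

```python
# Sketch: fusing angular offsets estimated independently from several markers
# into one absolute-orientation correction, using a circular mean so that
# angles near the +/-180 degree wrap are handled correctly.
import math

def wrap_deg(a: float) -> float:
    """Wrap an angle to the interval (-180, 180] degrees."""
    return (a + 180.0) % 360.0 - 180.0

def mean_offset(offsets_deg):
    """Circular mean of angular offsets given in degrees."""
    s = sum(math.sin(math.radians(a)) for a in offsets_deg)
    c = sum(math.cos(math.radians(a)) for a in offsets_deg)
    return math.degrees(math.atan2(s, c))

# Hypothetical offsets estimated from four independent markers.
offsets = [0.4, -0.2, 0.6, 0.0]
fused = mean_offset(offsets)
```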
Centimetre-accuracy in forests and urban canyons. Combining a high-performance image-based mobile mapping backpack with new georeferencing methods
2020, Blaser, Stefan, Meyer, Jonas, Nebiker, Stephan, Fricker, L., Weber, D.
Advances in digitalization technologies lead to rapid and massive changes in infrastructure management. New collaborative processes and workflows require detailed, accurate and up-to-date 3D geodata. Image-based web services with 3D measurement functionality, for example, transfer dangerous and costly inspection and measurement tasks from the field to the office workplace. In this contribution, we introduce an image-based backpack mobile mapping system and new georeferencing methods for capturing previously inaccessible outdoor locations. We carried out large-scale performance investigations at two different test sites located in a city centre and in a forest area. We compared the performance of direct, SLAM-based and image-based georeferencing under demanding real-world conditions. Both test sites include areas with restricted GNSS reception, poor illumination, and uniform or ambiguous geometry, which create major challenges for reliable and accurate georeferencing. In our comparison of georeferencing methods, image-based georeferencing improved the median precision of coordinate measurement over direct georeferencing by a factor of 10–15, to 3 mm. Image-based georeferencing also showed a superior performance in terms of absolute accuracies, with results in the range from 4.3 cm to 13.2 cm. Our investigations showed a great potential for complementing 3D image-based geospatial web services of cities as well as for creating such web services for forest applications. In addition, such accurately georeferenced 3D imagery has an enormous potential for future visual localization and augmented reality applications.
Open urban and forest datasets from a high-performance mobile mapping backpack. A contribution for advancing the creation of digital city twins
2021, Blaser, Stefan, Meyer, Jonas, Nebiker, Stephan
With this contribution, we describe and publish two high-quality street-level datasets, captured with a portable high-performance Mobile Mapping System (MMS). The datasets will be freely available for scientific use. Both datasets, from a city centre and a forest, represent area-wide street-level reality captures which can be used, e.g., for establishing cloud-based frameworks for infrastructure management as well as for smart city and forestry applications. The quality of these datasets has been thoroughly evaluated and demonstrated. For example, georeferencing accuracies in the centimetre range have been achieved using these datasets in combination with image-based georeferencing. Both high-quality multi-sensor street-level datasets are suitable for evaluating and improving methods for multiple tasks related to high-precision 3D reality capture and the creation of digital twins. Potential applications range from localization and georeferencing, dense image matching and 3D reconstruction to combined methods such as simultaneous localization and mapping and structure-from-motion, as well as classification and scene interpretation. Our dataset is available online at: https://www.fhnw.ch/habg/bimage-datasets
Long-term visual localization in large scale urban environments exploiting street level imagery
2020, Meyer, Jonas, Rettenmund, Daniel, Nebiker, Stephan
In this paper, we present our approach for robust long-term visual localization in large-scale urban environments exploiting street-level imagery. Our approach consists of a 2D image-based localization using image retrieval (NetVLAD) to select reference images. This is followed by a 3D structure-based localization with a robust image matcher (DenseSfM) for accurate pose estimation. This visual localization approach is evaluated by means of the ‘Sun’ subset of the RobotCar Seasons dataset, which is part of the Visual Localization benchmark. As the results on the RobotCar benchmark dataset are nearly on par with the top-ranked approaches, we focused our investigations on reproducibility and performance with our own data. For this purpose, we created a dataset with street-level imagery. In order to have independent reference and query images, we used a road-based and a tram-based mapping campaign with a time difference of four years. With approximately 90% of the images of both datasets successfully oriented, our approach proves to be robust. With a success rate of about 50%, every second image could be localized with a position accuracy better than 0.25 m and a rotation accuracy better than 2°.
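Success rates of this kind are typically computed by thresholding per-query pose errors. A minimal sketch with hypothetical per-image errors (the thresholds match those quoted above):

```python
# Sketch: counting a query image as "localized" when its position error is
# below 0.25 m AND its rotation error is below 2 degrees.

def success_rate(errors, pos_thresh_m=0.25, rot_thresh_deg=2.0):
    """errors: list of (position_error_m, rotation_error_deg) per query image."""
    ok = sum(1 for p, r in errors if p < pos_thresh_m and r < rot_thresh_deg)
    return ok / len(errors)

# Hypothetical per-query errors for four images: two pass both thresholds.
errs = [(0.10, 1.0), (0.30, 1.5), (0.20, 2.5), (0.05, 0.5)]
rate = success_rate(errs)
```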
Implementation and first evaluation of an indoor mapping application using smartphones and augmented reality frameworks
2019, Hasler, Oliver, Blaser, Stefan, Nebiker, Stephan
In this paper, we present the implementation of a smartphone-based indoor mobile mapping application based on an augmented reality (AR) framework and a subsequent performance evaluation in demanding indoor environments. The implementation runs on Android and iOS devices and demonstrates the great potential of smartphone-based 3D mobile mapping. The application includes several functionalities such as device tracking, coordinate, and distance measuring as well as capturing georeferenced imagery. We evaluate our prototype system by comparing measured points from the tracked device with ground control points in an indoor environment with two different campaigns. The first campaign consists of an open, one-way trajectory whereas the second campaign incorporates a loop closure. In the second campaign, the underlying AR framework successfully recognized the start location and correctly repositioned the device. Our results show that the absolute 3D accuracy of device tracking with a standard smartphone is around 1% of the travelled distance and that the local 3D accuracy reaches sub-decimetre level.