Traditional approaches to indoor mapping relied heavily on manual floor plan tracing or rule-based computer vision algorithms, which proved fragile when confronted with the wide variety of graphical representations used in architectural drawings. While Computer-Aided Design (CAD) floor plans in formats like DWG or DWF exist for most modern buildings, these detailed technical drawings are typically proprietary and inaccessible to the public. Mappers often work with low-quality images (JPEG or PDF format) of floor plans, necessitating manual digitization processes. RGB-D cameras, which capture both color and depth information, emerged as promising tools for 3D indoor scanning, though they face limitations including restricted range (typically less than 5 meters), sensitivity to lighting conditions, noisy point clouds at object edges, and computational demands for real-time processing. Automatic floor plan vectorization algorithms remain highly sensitive to image quality and graphical symbol variations, often requiring substantial manual editing even with state-of-the-art deep learning approaches.
To help address these challenges, researchers at UC Santa Cruz (UCSC) developed Semantic Interior Mapology (SIM). SIM’s Map Conversion Toolkit uses a grid-based tracing interface that shifts away from traditional corner-detection methods, which fail on high-definition floor plans with thin interior walls and long exterior walls. Users first define horizontal and vertical grid lines that overlay the wall segments, then select grid intersections as corners. Because every corner snaps to a shared grid line, co-planar walls are represented by perfectly aligned, collinear segments, a major improvement over manual drawing tools such as Google My Maps or Mapbox Studio, which are prone to connectivity errors and misalignments. SIM also handles diagonal walls through "ghost wall" lines while maintaining computational efficiency, distinguishing it from methods that require complex geometric preprocessing.

SIM’s Map Population Toolkit provides a semi-automatic workflow for extracting furniture and fixtures from RGB-D scans, addressing a fundamental gap: most floor plans lack small-scale interior features. The toolkit's mesh orientation algorithm overcomes the geometric distortion inherent in consumer-grade RGB-D cameras by combining histogram analysis of normal vector angles with rectification through a collineation transformation. Superpixel generation identifies planar surface patches on the 3D mesh, and a web-based interface supports manual object selection, creating an accessible workflow that non-experts can use without specialized training. Moreover, SIM offers end-to-end conversion from floor plan images to geo-registered GeoJSON files that can be rendered interactively in 3D using standard web tools such as Mapbox GL JS.
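To make the collinearity guarantee concrete, the sketch below shows how snapping corners to shared grid-line indices keeps traced walls exactly aligned. The `Grid`, `Corner`, and `Wall` types and the helper functions are hypothetical illustrations, not SIM's actual code or interface.

```typescript
// Hypothetical sketch of grid-based corner snapping (not SIM's actual API).

// Grid lines are defined once by the user, in image pixel coordinates.
interface Grid {
  xLines: number[]; // x positions of vertical grid lines
  yLines: number[]; // y positions of horizontal grid lines
}

// A corner is an intersection of one vertical and one horizontal grid line,
// stored by index rather than by raw pixel position.
interface Corner {
  xIndex: number; // index into grid.xLines
  yIndex: number; // index into grid.yLines
}

// A wall is a straight segment between two traced corners.
interface Wall {
  from: Corner;
  to: Corner;
}

// Resolve a corner to pixel coordinates. Because the position is derived
// from the shared grid arrays, every corner on the same grid line gets
// exactly the same coordinate -- no floating-point drift, no misalignment.
function cornerToPoint(grid: Grid, c: Corner): [number, number] {
  return [grid.xLines[c.xIndex], grid.yLines[c.yIndex]];
}

// Two walls traced on the same vertical grid line are collinear by construction.
function shareVerticalLine(a: Wall, b: Wall): boolean {
  return (
    a.from.xIndex === a.to.xIndex &&
    b.from.xIndex === b.to.xIndex &&
    a.from.xIndex === b.from.xIndex
  );
}

// Example: two wall segments traced on the same vertical grid line.
const grid: Grid = { xLines: [120, 480.5, 910], yLines: [80, 300, 640] };
const lower: Wall = { from: { xIndex: 1, yIndex: 0 }, to: { xIndex: 1, yIndex: 1 } };
const upper: Wall = { from: { xIndex: 1, yIndex: 1 }, to: { xIndex: 1, yIndex: 2 } };
console.log(shareVerticalLine(lower, upper)); // true: segments are exactly collinear
console.log(cornerToPoint(grid, lower.to));   // [480.5, 300], shared endpoint
```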
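The mesh orientation idea can be sketched as a histogram over face-normal directions: wall-like normals vote for a horizontal angle, and the dominant bin gives the rotation that brings walls into axis alignment. This is a simplified, assumption-laden sketch; SIM's actual algorithm additionally rectifies distortion with a collineation transformation, which is omitted here.

```typescript
// Simplified sketch: estimate the dominant wall orientation of an RGB-D mesh
// from a histogram of face-normal angles (assumed inputs; not SIM's code).

type Vec3 = [number, number, number];

// Angle of a normal projected onto the horizontal (x, y) plane, folded into
// [0, 90) degrees so that the four facing directions of a rectilinear room
// vote for the same bin.
function horizontalAngleDeg(n: Vec3): number | null {
  const [x, y, z] = n;
  const horiz = Math.hypot(x, y);
  if (horiz < Math.abs(z)) return null; // mostly floor/ceiling normal; skip
  const deg = (Math.atan2(y, x) * 180) / Math.PI;
  return ((deg % 90) + 90) % 90;
}

// Build a 1-degree histogram of wall-like normal angles and return the
// angle of the fullest bin: the rotation that aligns walls with the axes.
function dominantWallAngle(normals: Vec3[]): number {
  const bins = new Array<number>(90).fill(0);
  for (const n of normals) {
    const a = horizontalAngleDeg(n);
    if (a !== null) bins[Math.floor(a)] += 1;
  }
  let best = 0;
  for (let i = 1; i < 90; i++) if (bins[i] > bins[best]) best = i;
  return best; // rotate the mesh by -best degrees about the vertical axis
}

// Example: normals from a room scanned roughly 12 degrees off-axis.
const normals: Vec3[] = [
  [Math.cos(0.21), Math.sin(0.21), 0.02],   // wall facing ~12 degrees
  [-Math.sin(0.21), Math.cos(0.21), -0.01], // perpendicular wall, folds to ~12
  [0.01, -0.02, 0.99],                      // floor normal, ignored
];
console.log(dominantWallAngle(normals)); // ~12
```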
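Below is a minimal sketch of how a geo-registered GeoJSON file produced by such a pipeline could be rendered in 3D with a Mapbox GL JS fill-extrusion layer. The file name `floorplan.geojson`, the `height` and `base` feature properties, the map coordinates, and the access token are placeholders for illustration, not values defined by SIM.

```typescript
// Minimal sketch: render a geo-registered indoor-map GeoJSON as 3D extrusions
// with Mapbox GL JS. File name and feature properties are assumed.
import mapboxgl from "mapbox-gl";

mapboxgl.accessToken = "YOUR_MAPBOX_ACCESS_TOKEN"; // placeholder

const map = new mapboxgl.Map({
  container: "map",                       // id of a <div> in the host page
  style: "mapbox://styles/mapbox/light-v11",
  center: [-122.0583, 36.9916],           // illustrative coordinates (UCSC area)
  zoom: 18,
  pitch: 55,                              // tilt the camera so extrusions are visible
});

map.on("load", () => {
  // The GeoJSON produced by the conversion step; the URL is hypothetical.
  map.addSource("indoor", { type: "geojson", data: "floorplan.geojson" });

  // Extrude each polygon (wall, furniture footprint, etc.) by a per-feature
  // "height" property, assumed to be written into the GeoJSON.
  map.addLayer({
    id: "indoor-3d",
    type: "fill-extrusion",
    source: "indoor",
    paint: {
      "fill-extrusion-color": "#8da0cb",
      "fill-extrusion-height": ["get", "height"],
      "fill-extrusion-base": ["coalesce", ["get", "base"], 0],
      "fill-extrusion-opacity": 0.85,
    },
  });
});
```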
| Country | Type | Number | Date | Case |
| --- | --- | --- | --- | --- |
| United States of America | Issued Patent | 11,367,264 | 06/21/2022 | 2019-746 |
Keywords: 3D, 3D interactive, building interiors, building mapping, building maps, indoor mapping, indoor maps, RGB-D, depth-sensing camera, annotating floorplans, floorplans, geojson, mesh, floor plan, architectural