Global Navigation Satellite Systems (GNSS), a family of satellite constellations that includes GPS, Galileo, GLONASS, and BeiDou, provide positioning and navigation worldwide and are a primary input for smartphone location. In dense downtown "urban canyons," however, tall buildings block satellite line-of-sight, induce multipath, and degrade signal quality, so GNSS fixes are often off by tens of meters. Errors of this magnitude make it difficult even to tell which side of a street a pedestrian is on, yet this sidewalk-level accuracy is exactly what blind and low-vision pedestrians depend on. For these users, conventional smartphone navigation, whether pure GNSS, a camera-based visual positioning system, or beacon infrastructure, does not offer reliable, hands-free, street-side-accurate guidance. Most accuracy-focused approaches to date require detailed 3D city models, specialized hardware, and/or substantial map annotation, which limits their scalability across urban environments and hinders deployment in mainstream apps. Moreover, integrating precise localization with usable, low-attention interaction (no constant camera use, minimal glances at the screen) and robust street-crossing guidance for blind and low-vision pedestrians remains an open problem.
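To make the street-side ambiguity concrete, the sketch below estimates how often a GNSS fix lands on the wrong side of the street under a simple, illustrative model: cross-street positioning error modeled as zero-mean Gaussian with standard deviation sigma, and a pedestrian standing at the curb, half a street width from the centerline. The street width and the sigma values are illustrative assumptions, not measurements from the source.

```python
import math

def wrong_side_probability(street_width_m: float, sigma_m: float) -> float:
    """Probability that Gaussian cross-street GNSS error pushes the
    reported position across the street centerline.

    Illustrative model only: the pedestrian stands at the curb,
    street_width_m / 2 from the centerline, and the cross-street
    error component is N(0, sigma_m^2).
    """
    d = street_width_m / 2.0  # curb-to-centerline distance
    # P(error < -d) = standard normal CDF at -d/sigma, via erf
    return 0.5 * (1.0 + math.erf(-d / (sigma_m * math.sqrt(2.0))))

# Assumed 20 m-wide street:
p_open = wrong_side_probability(20.0, 5.0)    # ~5 m sigma, open sky
p_canyon = wrong_side_probability(20.0, 30.0)  # ~30 m sigma, urban canyon
```

Under these assumptions, a ~5 m error budget yields a wrong-side fix only a few percent of the time, while the tens-of-meters errors typical of urban canyons push that above one in three, which is why raw GNSS alone cannot support street-side-accurate guidance.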