Updating Maps & Imagery

All maps need updating, and a concept like MapMerge will need to be updated constantly. Sending out special equipment every year or two won’t cut it: it would cost too much, and users will demand that the system stays up to date.

Fortunately we are entering the age of very cheap, very small, high-quality digital cameras, combined with high-speed data transmission, GPS and sophisticated algorithms. In other words, lots of cameras everywhere that can update the imagery on the fly.

  • Humans wearing NowSpecs
  • Robots and drones will by necessity have built-in cameras
  • Public transport
  • Private cars with cams made available on a permission basis
  • Fixed street and building cameras made available on a permission basis

The success of the system will automatically lead to fresh imagery. Imagine a city where 10,000 drones make deliveries every day, and each sends back the latest imagery from every step of their journey. Multiple image streams can be combined to aggregate reality and weed out temporary presences.
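
As a rough illustration of the idea, a per-pixel median over co-registered photos of the same map tile keeps what most frames agree on and drops anything transient, such as a pedestrian or a parked van. The function name and tile representation below are illustrative, not part of any actual MapMerge API.

```python
# A minimal sketch of "aggregating reality": given several co-registered
# photos of the same map tile, a per-pixel median suppresses anything that
# appears in only a minority of frames (pedestrians, parked vans, drones).
import numpy as np

def aggregate_tile(frames: list[np.ndarray]) -> np.ndarray:
    """frames: co-registered HxWx3 images of the same tile, newest last."""
    stack = np.stack(frames, axis=0)                    # shape (N, H, W, 3)
    return np.median(stack, axis=0).astype(stack.dtype)
```

A median is just the simplest choice; a real system would presumably weight recent and trusted imagery more heavily.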

Public transport operates along fixed routes, so changes are easily determined.

NowSpecs can not only provide imagery, they can also determine the objects and views that people find most appealing. These can receive extra attention from the system.

Providers of imagery can be paid for what they provide, or receive a discount on their MapMerge access. Providers can be judged and rated on the quality and clarity of what they provide, lessening the opportunity for map-bombing.
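
One way such rating could work, sketched below with made-up names and thresholds: each provider carries a trust score that drifts up when their submissions agree with the aggregated consensus and down when they don’t, and that score weights their future contributions.

```python
# Hypothetical provider-trust sketch: submissions are weighted by a
# per-provider score, updated by how well each submission matches the
# aggregated consensus. Habitual map-bombers trend towards zero weight.
from collections import defaultdict

trust = defaultdict(lambda: 0.5)   # provider_id -> trust score in [0, 1]

def record_submission(provider_id: str, agreement: float, lr: float = 0.05) -> None:
    """agreement: 0..1 similarity between this submission and the consensus."""
    trust[provider_id] += lr * (agreement - trust[provider_id])

def weight(provider_id: str) -> float:
    """Weight applied to this provider's future imagery."""
    return trust[provider_id]
```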

The next evolution of cameras is 3D, in the direction of Microsoft’s Kinect, so depth and spatial referencing can be captured alongside the imagery.

Ultimately streets will need to be re-crawled by authorised machines to keep the official map current, but that would perhaps be annually or less often.

From a systems viewpoint:

  • Authorised machines create 3D maps, with imagery
  • 3rd party cameras supply supplemental and up-to-date imagery
  • Recent imagery is aggregated with existing images to form a trustworthy whole
  • 3D structures within the mapping system are only updated when overwhelming inputs arrive from multiple independent sources, and such updates are flagged to users as tentative (see the sketch after this list)
  • 3D updates are verified by authorised machines in areas of high interaction
  • Authorised machines make a complete re-crawl every year or two on average (a guess)
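
A rough sketch of that update flow, with entirely assumed thresholds: a proposed 3D change stays hidden until enough independent sources agree, is then shown as tentative, and only becomes official once an authorised machine verifies it.

```python
# Illustrative update states for a proposed 3D change; thresholds are guesses.
from dataclasses import dataclass, field

MIN_SOURCES = 5       # assumed number of independent sources required
MIN_TRUST_SUM = 3.0   # assumed combined trust weight required

@dataclass
class ProposedChange:
    location: tuple                                  # e.g. (lat, lon, elevation)
    supporting: dict = field(default_factory=dict)   # source_id -> trust weight
    verified_by_authorised_machine: bool = False

    def status(self) -> str:
        if self.verified_by_authorised_machine:
            return "official"
        if (len(self.supporting) >= MIN_SOURCES
                and sum(self.supporting.values()) >= MIN_TRUST_SUM):
            return "tentative"   # shown to users, flagged as unverified
        return "pending"         # not shown at all
```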

So we end up with a highly educated guesstimate of reality presented to users of MapMerge.

Example, a public park in a popular tourist destination:

  • Authorised machines create a 3D map, with imagery
  • A new statue is installed
  • Enough image sources concur that a new object has arrived
  • The map is updated, including images from multiple angles

Or say a billboard has a new advertisement each month. The system can recognise that it is a billboard and accept updates to it more readily. (Or, for commercial purposes, any advertising not officially connected to the system could simply be blanked out.)

Graffiti, on the other hand, could be ignored if the system notices it is regularly removed.

New signage for a business could quickly become part of the map proper, while garbage wheelie bins that come out every Tuesday night can be recognised and ignored via machine learning.
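
A simple way to picture this kind of policy, with made-up categories and thresholds: decide how to treat an observed change based on what sort of surface it sits on and how often that spot has changed before.

```python
# Illustrative change policy only: billboards are expected to change, while
# spots that change on a short cycle (weekly bins, quickly removed graffiti)
# are not worth updating at all.
from datetime import timedelta

def change_policy(region_type: str, change_intervals: list[timedelta]) -> str:
    """change_intervals: gaps between previous changes observed at this spot."""
    if region_type == "billboard":
        return "update_fast"          # expected to change; accept new imagery quickly
    if change_intervals:
        avg = sum(change_intervals, timedelta()) / len(change_intervals)
        if avg <= timedelta(days=7):
            return "ignore"           # weekly bins, short-lived graffiti
    return "normal_consensus"         # fall back to the usual multi-source rule
```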

Human Verification

Any such system is a target for abuse, but fortunately MapMerge will have humans constantly using it. People wearing NowSpecs could be employed to compare what they really see with the overlay their special glasses provide. They could be specifically hired for the task, or simply bored people on their daily walk to work keen to earn a dollar for their troubles.
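
A minimal sketch of how such a spot-check might be recorded, with hypothetical names and an assumed micro-payment: a wearer near a tentative change answers whether the overlay matches what they actually see, and the answers are tallied.

```python
# Hypothetical human-verification task: yes/no answers from NowSpecs wearers
# feed back into whether a tentative change is confirmed or rolled back.
from dataclasses import dataclass

@dataclass
class VerificationTask:
    change_id: str
    location: tuple          # where the tentative update sits
    reward_cents: int = 50   # assumed micro-payment

def submit_answer(task: VerificationTask, matches_reality: bool, votes: dict) -> None:
    """Tally answers; downstream logic confirms or rolls back the change."""
    tally = votes.setdefault(task.change_id, {"yes": 0, "no": 0})
    tally["yes" if matches_reality else "no"] += 1
```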

Inside a Business

Example, a pub/bar:

The premises are officially mapped. The business allows avatars to visit, but not tele-presence robots. Most visiting humans will, of course, be wearing NowSpecs so they can see recent information and avatars. Their cameras can send imagery to the system. Fixed cameras in the bar can also contribute.
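
One could imagine the bar’s choices captured in a per-venue permission record along these lines; every field name here is hypothetical.

```python
# Hypothetical per-venue permissions, mirroring the pub/bar example above.
from dataclasses import dataclass

@dataclass
class VenuePermissions:
    venue_id: str
    allow_avatars: bool = True               # remote visitors may appear as avatars
    allow_telepresence_robots: bool = False
    accept_patron_imagery: bool = True       # NowSpecs wearers may contribute imagery
    accept_fixed_cameras: bool = True
    prioritise_fresh_imagery: bool = True    # the venue pays, so recent imagery is prioritised
    blur_faces_for_outsiders: bool = True

pub = VenuePermissions(venue_id="example-pub")
```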

The system has designated the bar as a place of high social interactivity, and the bar pays to be involved, so current imagery is prioritised.

Someone at home, using a VR system, can navigate to the bar. The structure, fixtures and fittings will be from the most recent official mapping, augmented by tentative updates from aggregated imagery. However, they will also see people moving and interacting in the bar in real time, as if they were really there. It will be a highly accurate representation of what it would be like to actually be there, and, if they choose, their avatar will be there, interacting with anyone wearing NowSpecs.

Meanwhile a person actually walking past the bar can use their NowSpecs (with permission) to see the same scene inside. Faces will be blurred, but otherwise they will get a sophisticated answer to their question – should I go in?