Ryan Ahola: Great, thanks Ted.
Ryan Ahola: So thank you everyone for joining us this morning or afternoon or evening, whatever time it is where you are. So this is our
Ryan Ahola: Week 2 Day 2 session for the maps for the work- the maps for the web workshop. My name is Ryan Ahola
Ryan Ahola: I'm a member of Natural Resources Canada and I'm part of the program committee that's been helping with organizing this workshop.
Ryan Ahola: So today we have a few different sessions, starting with the first presentation about creating accessible web map widgets, and then we'll have a panel discussion
Ryan Ahola: about the same topic. Then we'll move into presentations on 3D map display, followed by a panel session on maps and augmented reality, and then we'll close with a breakout session on
Ryan Ahola: GeoPose for web maps.
Ryan Ahola: I think Peter posted the link to the Gitter chat in the Zoom chat. So if you can access that, if you have any questions or any discussion, of course those are running in the Gitter chat
Ryan Ahola: Online. And I think that's all the logistical information today, so I guess we'll start with the presentation from Nicolò
Ryan Ahola: I'm going to mess up your last name, I'm sorry, but Nicolò Carpignoli and Joshue O Connor,
Ryan Ahola: about accessible web map widgets; specifically, an introduction to the research from the Research Questions Task Force of the Accessible Platform Architectures Working Group.
Ryan Ahola: And I believe this is being done through a video which I've set up here, so I'll make sure that I'll share my screen. And we'll make sure that we're looking at the right one.
Ryan Ahola: Of course, that doesn't work when you want it to work.
Hi everybody, my name is Nicolò, and today I'm going to talk about accessible annotations for a native map viewer for the web platform. I'm going to show you user needs and requirements.
And I'm an expert with the W3C APA. First of all, I'd like to thank Joshue O Connor from the W3C, who has helped me a lot with this presentation, and all the other members of the RQTF of the W3C APA.
So today we're going to talk about accessibility and maps. As you know, maps are used every day to find routes, to navigate, to provide other information, and to meet other
very important user needs. Maps can be either complex or simple objects, according to the user's needs.
They are configurable, so users, most of the time, switch between different layers in order to see different data for the same geographic area. So as you can imagine, in general, maps are complex objects, and from an accessibility perspective they represent a challenge, but
we have to start from one aspect, and maybe the most important: the need for annotations.
We talk about annotations of geolocation data and map metadata. Why are they so important for accessibility?
Because once those annotations are specified and delivered, they become available to be read by machines or by humans,
for different outputs, such as non-visual outputs like speech synthesis or symbols. This is very important because text annotation is the most portable format and the only one that can be translated into other forms.
So we start from the text annotations and then we can deliver the data and the information for different users.
We have grouped all these use cases and requirements into four main fields of use cases for maps: information retrieval, navigation, comparing, and monitoring.
We're going to start with the first one. But first of all, we'd like to state that the following user needs are neither exhaustive nor definitive.
They represent a starting point to begin to orient towards user needs and potential requirements. So it's a way to start to think about accessibility for a map object for the web.
So, user needs for information retrieval. We are going to start with the first user need and the first two requirements. Okay, so the first user need: a student wants to learn the boundaries of a certain geographical area.
We have two requirements to address this: different regions must be labeled with text and other metadata, and each place of interest, for example cities, should be available with labels that contain not only the place of interest but also the region it belongs to.
In this user need, and all the others presented in this presentation, metadata may be ARIA or other HTML attributes. So this is to give some kind of nomenclature for metadata.
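The labelling requirement above could be sketched as follows. This is a minimal illustration, not part of any specification; the `placeLabel` helper and the feature shape are assumptions.

```javascript
// Hypothetical helper: build a text label for a place of interest that
// includes the region it belongs to, as the requirement asks.
function placeLabel(feature) {
  return `${feature.name}, ${feature.region}`;
}

// A map widget could expose this as the accessible name of the element
// that renders the feature, e.g. via an aria-label attribute.
const label = placeLabel({ name: "Turin", region: "Piedmont" });
console.log(label); // "Turin, Piedmont"
```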
The second user need: a researcher wants to analyze a geographical area using different zoom levels and measurements, so
users should be able to configure their experience of reading the map. This is very important because different users might need different scale units and other metadata in order to read and
understand the data that are on the map. So scale units and other metadata should be available as labels in the map context; of course, they must also be editable using available controls on the map.
User need 1.3: a user wants to study only the rivers and the lakes of an area without being distracted by other data. This is very important because,
as we said before, maps are very complex objects, and so most of the time users want to read only certain data rather than all the data that are available on the map. And so,
as requirements for this user need: the different data must be available as layers that can be switched on and off by users, according to their needs, and the active layers must be available as text labels so
they can be found easily by users while looking at the map.
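A minimal sketch of such a layer model (the class and method names are invented for illustration):

```javascript
// Layers can be switched on and off, and the active layers are exposed
// as text labels that assistive technology can read out.
class LayerSet {
  constructor() {
    this.layers = new Map(); // layer label -> active flag
  }
  add(label, active = false) {
    this.layers.set(label, active);
  }
  toggle(label) {
    this.layers.set(label, !this.layers.get(label));
  }
  activeLabels() {
    // Text labels for the layers that are currently switched on.
    return [...this.layers].filter(([, on]) => on).map(([label]) => label);
  }
}

const layerSet = new LayerSet();
layerSet.add("Rivers");
layerSet.add("Lakes");
layerSet.add("Roads");
layerSet.toggle("Rivers");
layerSet.toggle("Lakes");
console.log(layerSet.activeLabels()); // ["Rivers", "Lakes"]
```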
Now we're going to talk about user needs for navigation.
We start with this user need: a user with low vision wants to highlight the route between two places. We have one requirement in order
to solve this particular user need: the distance between places must be available as a text label. This is very important because when a user wants to navigate towards
a specific destination, one of the most important pieces of information that he or she needs is the distance. And so, if the user is using a real-time navigation mode,
that is very common. So think about a user who wants to navigate towards a destination and is moving while looking at the map. So we are using
a real-time or near-real-time navigation mode. In this case, the distance should be updated, because the position of the user is updating, and these updates should be presented to the user as text labels and should contain information about the new position.
Another user need about maps and navigation: a user who finds orientation difficult wants to navigate to a destination. This is a very common user need.
If real-time navigation mode is enabled, as we said in the previous user need,
the directions to follow must be available as text labels, and every update or change must be presented to the user.
But as we know, not every user finds it comfortable to use
navigation directions while looking at the map and, while
walking, for example, in the real world. Sometimes these might be distracting or not easy to understand, and so,
most of the time, if real-time navigation mode is enabled, the directions should also be available as visual hints, like arrows, so not only as text labels, as text information.
Of course, if the user is using these visual hints,
proper alternative text should also be presented. So
alternative text for the visual hints, together with text labels,
is very important to provide in order to navigate towards a destination.
In this case, if using visual hints, the alternative text will need to be dynamically driven, accurate, and also updated in near real time. This is very important because not every user finds it comfortable to use
text navigation information.
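One way to keep the visual hint and its alternative text in sync is to derive both from the same navigation update. A sketch (the function and the exact wording are assumptions):

```javascript
// Turn a heading update into a visual hint plus the text alternative that
// must be regenerated on every position update.
function directionUpdate(bearingDegrees, distanceMeters) {
  const dirs = ["north", "north-east", "east", "south-east",
                "south", "south-west", "west", "north-west"];
  const dir = dirs[Math.round(bearingDegrees / 45) % 8];
  return {
    hint: "arrow",                                        // e.g. a rotated arrow icon
    altText: `Head ${dir} for ${distanceMeters} metres`,  // near-real-time alt text
  };
}

console.log(directionUpdate(90, 120).altText); // "Head east for 120 metres"
```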
Another important user need about navigation can be this one: a wheelchair user wants to know the estimated time needed to move toward a certain destination, following the route on the map.
The estimated travel time is, like the distance we mentioned, another important piece of information when using maps for navigation, and the important thing here is that the estimated travel time is strictly bound to the travel mode. So estimated travel time must be available as text labels,
and the estimated travel time must also be available for different travel modes. This is important; think, for example, of how Google Maps
presents different travel modes with different estimated travel times. The important thing here is also to present to the user only those travel modes that are concretely available, really available, on that path to that destination. So,
if the map presents estimated travel times according to travel modes that are really available for that path, the user has all the information that he needs in order to know
how to navigate to that destination using the map.
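That filtering could look like the following sketch (the route data shape is an assumption, not any real mapping API):

```javascript
// Produce a text label per travel mode, but only for modes that are
// concretely available on this route.
function travelTimeLabels(route) {
  return route.modes
    .filter((m) => m.available)
    .map((m) => `${m.mode}: about ${m.minutes} minutes`);
}

const route = {
  modes: [
    { mode: "walking", minutes: 25, available: true },
    { mode: "cycling", minutes: 9,  available: true },
    { mode: "driving", minutes: 6,  available: false }, // e.g. pedestrian-only path
  ],
};
console.log(travelTimeLabels(route));
// ["walking: about 25 minutes", "cycling: about 9 minutes"]
```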
This is a very common user need
for maps. So now we are going to talk about user needs for comparing.
A journalist wants to study data about the spread of a virus over a large geographic area. He wants to quickly understand the relevant data
and spot the differences between countries. This one is also very common, because think about maps used in magazines or newspapers.
These maps should be easily understandable for a very large number of users, and so numeric data should be available as text labels placed over the map, and not only with visual highlighting.
This is important because a very important rule for accessibility in general is not to rely only on colors, or only on symbols, shapes, and sizes,
because users may not be able to spot differences in colors and sizes easily. And each label should contain the value, the unit, and the related count: all the data that the user needs
in order to understand the big picture.
Another requirement: if numeric data are shown, as we said before, using colors or symbols, a legend must be available as a text annotation
in the map context or near the map context. This is important because if we are using non-text data, so visual highlighting of some kind, we have to provide a legend in order to understand those symbols, that coloring, and so on.
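As a sketch, a text legend can be derived directly from the symbology that drives the colouring (the data shape here is invented for illustration):

```javascript
// Render the legend as plain text so it does not rely on colour perception.
function legendText(symbology) {
  return symbology.map((s) => `${s.symbol}: ${s.meaning}`).join("; ");
}

console.log(legendText([
  { symbol: "dark red",  meaning: "more than 1000 cases per 100k" },
  { symbol: "light red", meaning: "100 to 1000 cases per 100k" },
]));
// "dark red: more than 1000 cases per 100k; light red: 100 to 1000 cases per 100k"
```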
Another example, strictly related to the prior one: a blind student wants to know the demographic data of a certain area specified on the map.
In this case we are also talking about numeric data, but we are talking about a blind student, a blind user.
Think about this blind user who wants to navigate all those data over the map that are, yes,
shown with text labels, but there are a lot of them: a very large amount of information on the map. And so the user would have to navigate them
one by one, and this is very difficult; it would become a very bad user experience. And so, in these particular cases, when we have a large amount of information as numeric data on the map,
an alternative mode should be provided in order to read those data.
As an example of an alternative mode: numerical data maps are usually shown with different color graduations or with symbols,
for example, of different sizes. Think about a city with one thousand people that is highlighted with a big red circle, and instead think about another one with ten people that is shown with a tiny dot.
Not every user may prefer, or may be able, to spot those differences. So people who find it difficult to distinguish shades, colors, boundaries, and symbols can retrieve the same data using an alternative mode, for example, a table.
So it's very important to provide both data views and let the user decide which one to use according to his user needs.
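The key point is that both views can be generated from the same underlying features; a small sketch (the feature shape is assumed):

```javascript
// The same features that drive the sized circles can feed a table view.
function toTableRows(features) {
  return features.map((f) => [f.name, f.population]);
}

const features = [
  { name: "Big City",    population: 1000 }, // big red circle on the map
  { name: "Tiny Hamlet", population: 10 },   // tiny dot on the map
];
console.log(toTableRows(features)); // [["Big City", 1000], ["Tiny Hamlet", 10]]
```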
Last but not least, we are talking about monitoring on maps, and we have here a user need that
summarizes all the other requirements that we talked about before: a user wants to analyze how real-time data are changing in a geographic area shown on the map. This is a very complex user need because it involves
a lot of numeric data and real-time updates, and so the requirements are the following: data must be available on the map as text labels or other metadata,
which should be updated automatically if that feature is switched on, because not every user may like automatic updates of information; some would rather update with a refresh or with a specific control.
Also, the data should be available in an alternative mode, because we are talking about a lot of complex numeric data; for example, a table should be provided, which would be very useful for
a lot of users in order to read those data and be constantly updated about that information.
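A small sketch of gating live updates behind a user preference (all names here are illustrative, not from any spec):

```javascript
// Incoming values are held until either auto-update is on or the user
// explicitly refreshes, so updates never surprise the user.
class LiveData {
  constructor() {
    this.autoUpdate = false; // user preference, off by default
    this.pending = null;     // latest value from the feed
    this.shown = null;       // value currently presented to the user
  }
  receive(value) {
    this.pending = value;
    if (this.autoUpdate) this.shown = value;
  }
  refresh() {
    this.shown = this.pending; // explicit, user-initiated update
  }
}

const feed = new LiveData();
feed.receive(42);
console.log(feed.shown); // null, because auto-update is off
feed.refresh();
console.log(feed.shown); // 42
```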
So I thank you again for your attention. Thank you Joshue O Connor and all the RQTF team, and thanks everybody. I hope I have given you a first overview of maps and accessibility, and of the need for annotations, which would be a start
in order to think more about accessibility for maps in the context of the web platform. Thank you, everybody. Have a nice day.
Ryan Ahola: Great.
Ryan Ahola: Okay. Excellent. Thank you Nicolò, that was another great example of the importance of accessibility when we're talking about maps for the web. I just wanted to ask if there's anything that you wanted to add
Ryan Ahola: now, just before we transition into our panel discussion on the topic.
Nicolò Carpignoli: Oh no, I thank you everybody for the attention. I am starting to see some comments, some nice comments and very interesting topics on the Gitter chat so I'm ready to answer and to follow the discussion.
Ryan Ahola: Great, thanks a lot.
Ryan Ahola: So I guess, on that, um,
Ryan Ahola: I think we can transition straight into our panel discussion, because it's on the same topic. And maybe that will be an opportunity for us to have more discussion and ask questions.
Ryan Ahola: So maybe with that, I'll transfer it over to Gobe Hobona from the Open Geospatial Consortium, who's going to be moderating this panel to introduce the panelists, so I'm
Doug Schepers: Actually, I should, we should make sure that everyone's here.
Gobe Hobona (OGC): Okay.
Gobe Hobona (OGC): So yeah, Ryan, thanks for that introduction. So let me just quickly check: do we have Doug
Gobe Hobona (OGC): Doug Schepers,
Gobe Hobona (OGC): Brandon Biggs, Tony Stockman and Nicholas Giudice?
Gobe Hobona (OGC): Are we all on the call.
Doug Schepers: Nicholas. Are you there?
Nicholas Giudice: I'm here again.
Doug Schepers: Okay, great.
Gobe Hobona (OGC): Alright, we're all here.
Gobe Hobona (OGC): That's, that's great. Okay.
Gobe Hobona (OGC): Good day everyone. So my name is Gobe Hobona, I work for the OGC, the Open Geospatial Consortium, and I'm going to be moderating the panel discussion over the next 30 minutes, so
Gobe Hobona (OGC): We're honored to have a panel of experts discussing with us today. We have Doug Schepers from Fizz Studio
Gobe Hobona (OGC): We have Brandon Biggs from Audiom, and Brandon also works for the Smith Kettlewell Eye Research Institute, and we have Dr. Tony Stockman from Queen Mary University
Gobe Hobona (OGC): In London, and Dr. Nicholas Giudice from the University of Maine.
Gobe Hobona (OGC): Welcome.
Gobe Hobona (OGC): So,
Gobe Hobona (OGC): Let me first of all just give you an overview of our panelists' biographies. Doug Schepers is the founder and director of Fizz Studio, an accessible data visualization startup in Chapel Hill,
Doug Schepers: North Carolina.
Gobe Hobona (OGC): USA. Sorry, Doug? North Carolina. All right.
Gobe Hobona (OGC): Previously Doug spent a decade defining standards as a technical product manager at W3C. Welcome Doug.
Gobe Hobona (OGC): Brandon Biggs is currently an engineer at the Smith Kettlewell Eye Research Institute, working on non-visual
Gobe Hobona (OGC): map representations. He graduated with his master's in inclusive design in May 2019 from OCAD University, where his thesis was on designing accessible non-visual maps. He's a blind web developer who has had enough vision to see a map but not enough to use it.
Gobe Hobona (OGC): The main focus of his research has been on digital audio maps, in particular maps that have been developed through the natural laboratory of audio games. He's the creator of Audiom, the multimodal map web component that can be embedded into web pages. Welcome Brandon.
Gobe Hobona (OGC): Also joining us is Dr. Tony Stockman, Senior Lecturer in the Cognitive Science Research Group and the Centre for Digital Music in the School of Electronic Engineering and Computer Science at Queen Mary University of London.
Gobe Hobona (OGC): His commercial experience includes working as a systems programmer at Rolls Royce and as a Systems Analyst at ICI in Manchester.
Gobe Hobona (OGC): He has over 100 peer-reviewed publications on interaction design and auditory displays, and developed a tool for editable audio maps with his research student.
Gobe Hobona (OGC): Tony's an emeritus board member of the international community for auditory display having served as president from 2011 to 2016. Welcome Tony.
Gobe Hobona (OGC): Finally, we've got Dr. Nicholas Giudice, a professor of spatial informatics in the School of Computing and Information Science at the University of Maine.
Gobe Hobona (OGC): He has worked for 20 years in the areas of multimodal spatial cognition and the design and evaluation of accessible spatial displays, including maps.
Gobe Hobona (OGC): His primary focus is on accessibility for blind and visually impaired people, or sighted folks in
Gobe Hobona (OGC): eyes-free situations. He has published broadly and is on the board of two accessibility journals. In recent years, he pioneered the development of vibro-audio maps,
Gobe Hobona (OGC): which are multimodal maps that are rendered on the touchscreens of smart devices such as phones and tablets. Welcome Nicholas.
Gobe Hobona (OGC): So just before we get started with the questions, so Nick is an expert in vibro audio maps, and, Tony and Brandon specialize in digital auditory maps,
Gobe Hobona (OGC): So I think it's a great opportunity for us to ask them about these concepts, what these map types are, so let's start off with, let's start off with Nick, Nick.
Gobe Hobona (OGC): In one minute, could you just give us an overview of what vibro-audio maps are, please?
Nicholas Giudice: Sure. So Vibro audio maps. Um, can you hear me.
Doug Schepers: Yes, we're good.
Nicholas Giudice: So vibro-audio maps are maps that are rendered on the touchscreen of a smart device. So it's a visual map that uses combinations of vibration, audio, and kinesthetic movement. You might say, well, there's no touch on a touchscreen; I mean, a touchscreen's flat.
Nicholas Giudice: But it turns out, if you render the map and you're having someone move their hand around,
Nicholas Giudice: when they touch a visual element it synchronously triggers the vibration motor
Nicholas Giudice: at that x-y location, or auditory cues, and the perception is one of feeling a line or a point or a region, and they can move their hand around and explore that to learn the map. And it turns out it works really well for people to,
Nicholas Giudice: yeah, to learn different types of graphics. And just very quickly, you may say, well, why use touch versus audio or language?
Nicholas Giudice: They all can work. But the beauty of touch is that it's most similar to vision; of the non-visual senses, it's the closest to vision, especially for spatial information
Nicholas Giudice: like maps. The brain processes visual and tactile spatial cues
Nicholas Giudice: very similarly. We use different areas, but the computation done by the neurons in those areas is
Nicholas Giudice: very similar. So when you want to convey spatial information, it's way easier to do it using touch than, for instance, to try to describe a map with language, which can work, but takes more cognitive load.
Nicholas Giudice: And so there are advantages, especially when we're thinking about this in a multimodal way; these can work with other people's audio renderings, but add touch.
Nicholas Giudice: And it just kind of fits with how the brain processes information. So it's a bio-inspired interface, in that the brain is inherently multimodal, and so more devices and more interfaces, especially non-visual ones, need to also be multimodal to convey that same information.
Nicholas Giudice: And I could go on for a long time, but I won't
Nicholas Giudice: Because I think Gobe will cut me off.
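The core loop Nicholas describes, hit-testing the touch point against map geometry before triggering feedback, can be sketched like this (point features only, for brevity; in a browser the positive case would call something like `navigator.vibrate(pattern)` or play an auditory cue):

```javascript
// Return the feature under the finger, or null if the touch misses everything.
function featureAt(x, y, features) {
  return features.find(
    (f) => Math.hypot(x - f.x, y - f.y) <= f.radius
  ) ?? null;
}

const mapFeatures = [{ name: "Library", x: 120, y: 80, radius: 10 }];
console.log(featureAt(125, 83, mapFeatures)?.name); // "Library" -> trigger vibration/audio
console.log(featureAt(200, 200, mapFeatures));      // null -> no feedback
```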
Gobe Hobona (OGC): [laughs] Thanks, Nicholas for that overview
Gobe Hobona (OGC): Okay, Tony,
Gobe Hobona (OGC): Would you like to just give us a one minute overview of what digital auditory maps are from your perspective.
Tony Stockman: Well, I mean, in a sense, you could say they are a subset of vibro or tactile auditory maps. We tended to focus on audio,
Tony Stockman: And, not least because
Tony Stockman: obviously sound cards come with most PCs and we wanted something that was going to be sort of globally accessible.
Tony Stockman: The trick with designing these is how you're going to use different types of sound. So, our requirements investigations, talking to visually impaired users,
Tony Stockman: have generally found that the really key information, if you are restricting yourself to the audio mode of presentation,
Tony Stockman: really needs to be delivered with speech. So things like distance and turn-by-turn directions really need to be given using speech, but
Tony Stockman: there is some real value to be had by leveraging our ability to hear other sounds at the same time as speech, and making use of that. And those non-speech sounds come in two categories. One is auditory icons; auditory icons are the audio equivalent of visual icons.
Tony Stockman: They use representative sounds to present landmarks or points of interest along the way. So in systems we've built. We had sounds to represent things like schools, parks, swimming pools, cafes, restaurants, and so on.
Tony Stockman: And the trick there is to try to choose auditory icons that are meaningful to most people wherever they happen to live and that can be quite tricky.
Tony Stockman: And we've had lots of prototyping sessions, where we've worked through these with different groups of people.
Tony Stockman: And then the other type of non speech sounds are earcons. These are more abstract sounds and they're abstract because you particularly don't want them to be confused with auditory icons. You don't want people mistakenly to think that they represent a point of interest. So they are deliberately
Tony Stockman: Not natural sounding particularly but they can be used to convey other bits of information about
Tony Stockman: About the route.
Tony Stockman: Typically, possibly, things like how many degrees are associated with a particular turn. I don't mean literally down to one degree, obviously, but maybe
Tony Stockman: breaking the 360 degrees down into, say, chunks of 45 or 60.
Tony Stockman: So earcons are an important contributing source of sound in auditory maps, and overall
Tony Stockman: the skill in developing a good auditory map is in how to choose the sounds that are going to be most effective, and in balancing these different types of sound so that they give an engaging and usable experience to people using the map.
Doug Schepers: Gobe, would you mind if I take 30 seconds? Those are both excellent explanations, but do you mind if I give 30 seconds to people who may not be familiar with accessibility in general, on how blind people, for example, will use a screen reader?
Doug Schepers: Very briefly,
Doug Schepers: The two techniques that they talked about are supplemental in some ways, in contrast with the typical modality for browsing the web
Doug Schepers: that a blind person will use, which is a screen reader. A screen reader, if you're not familiar with it, is a piece of software that
Doug Schepers: is a bit like an audio book, but interactive. So you can move around, you can search, you can find things, you can find the table of contents of a page.
Doug Schepers: It announces links, it announces alt text for images, things like this. But that is for textual content; you can apply it to visual content, for example,
Doug Schepers: reading out the alt text of an image. But we have complex images, like maps, or data visualizations, charts, etc.,
Doug Schepers: that you're meant to
Doug Schepers: actually visually explore. That's where these techniques, in addition to hearing the speech, that's where these vibro-tactile and auditory maps really shine. So a combination of all of these techniques is really
Doug Schepers: what's sought here. And one thing I want to bring out is that the techniques they're talking about aren't necessarily going to be things that need to be
Doug Schepers: enabled, or specifically enabled, by the authors of the map. They might be things that
Doug Schepers: are supplemental, that are additional pieces of software; that is, assistive technology
Doug Schepers: that goes on top of a map that is properly annotated, as per the last presentation,
Doug Schepers: and is properly marked up in such a way that it can be consumed and transformed into these modalities; it can enhance these modalities, it can enable these modalities.
Doug Schepers: So you don't necessarily need to make your maps auditory or vibro-tactile, but you should make your maps amenable to using those techniques.
Gobe Hobona (OGC): Okay, thanks Doug.
Gobe Hobona (OGC): Brandon,
Gobe Hobona (OGC): Any chance you could give us just a one-minute or so overview of digital auditory maps from your
Gobe Hobona (OGC): From your perspective.
Brandon Biggs: Yeah, absolutely. So Tony gave a really good summary of digital auditory maps, but I think there are two different
Brandon Biggs: overview types. There are interactive auditory maps; those are the ones where you
Brandon Biggs: have an input device and you're moving around the map, very much like a game, like World of Warcraft or something like that. Then there are
Brandon Biggs: presentational maps. Those are like recordings (Tony's been working on these specifically), and you basically listen to an overview of a map.
Brandon Biggs: You can hear turn-by-turn information, or you get information about the map as you listen to it. And so the only interaction you have there is fast-forwarding and rewinding.
Brandon Biggs: So I think those are the two main types of maps, and they each present different information. But in my research,
Brandon Biggs: the interactive maps have been the ones where you can present information such as shapes and routes, giving route, landmark, and survey knowledge
Brandon Biggs: very quickly, and I think survey knowledge is what the presentational maps are really, really good at. So I think you need kind of a mixture of both, and you can have both, which is what's really nice about auditory maps.
Gobe Hobona (OGC): Okay.
Gobe Hobona (OGC): Thanks, Brandon. Okay, so
Gobe Hobona (OGC): Let's move on to the next question. So, with all these approaches to creating accessible auditory maps,
Gobe Hobona (OGC): what is the biggest challenge that you see standards development organizations needing to face? And not just standards development organizations, but also browser developers. What's the biggest challenge that these organizations will encounter when they attempt to,
Gobe Hobona (OGC): you know, develop standards around these types of maps?
Gobe Hobona (OGC): Shall we start with, let's start with Doug
Doug Schepers: Oh, um, I think I sort of summarized part of
Doug Schepers: my goal there, which is to say, I think there have been several excellent presentations on extending the web to do different things
Doug Schepers: around accessibility, in terms of maps, throughout this entire workshop. I really liked the annotation one that immediately preceded this panel; I recommend people watch that.
Doug Schepers: And
Doug Schepers: I think that we're fortunate now: we've got a pretty rich platform.
Doug Schepers: For example, we can enable vibratory maps, I don't want to step on Tony's toes, but we can enable vibratory maps because we have the web Vibration API.
Doug Schepers: We can enable the auditory maps because we have the Web Audio API. So some of the base features that are needed
Doug Schepers: to enable the modalities that we're talking about are already there. I think that
Doug Schepers: one of the next steps is for us to have standard ways of representing particular things, whether that's an ontology or something loosely based on text: basically, being able to say that something is a restaurant, and from knowing that it's a restaurant and knowing
Doug Schepers: the geometry, etc. of an area,
Doug Schepers: you can enable a lot of features client side. I think it's going to be a combination of web standards features that already exist,
Doug Schepers: perhaps either ontologies or vocabularies like ARIA specifically around maps,
Doug Schepers: and then probably a lot of experimentation by pioneers like these guys, who are
Doug Schepers: really
Doug Schepers: making it work on the client side, and then probably that would need to be folded back into standards going forward. I think it has to be a combination of standards plus innovation in software.
Gobe Hobona (OGC): Okay. All right. Thanks, Doug. Okay, Brandon.
Gobe Hobona (OGC): Why don't you just share your thoughts on what the biggest challenges that standards development organizations and browser developers will encounter.
Brandon Biggs: Yeah, I think there are three things. One is the data; there needs to be a lot more work with data. The second is the APIs; there are some future APIs, like with XR,
Brandon Biggs: that are coming out, and we need to be thinking about those and how we can represent maps in that way. And the third is getting the users involved in whatever we make. So whether it's these
Brandon Biggs: accessible maps, getting people who build these accessible maps involved with how the data should be represented, or, if we're actually building these interfaces, we need the users to be experiencing them. For the data specifically, I think there should be, like, a name:
Brandon Biggs: every single one of these interfaces that we've talked about requires a name attribute or a label attribute, very similar to what the speaker in the last presentation was talking about.
Brandon Biggs: And that's just super critical. And then another big issue that we need to be considering is raster data,
Brandon Biggs: because it's extremely difficult right now for us to represent raster data in any vibrohaptic or audio maps: it's just a picture, and pictures are completely useless for computers. What we're doing essentially is translating
Brandon Biggs: the data, using the computer, into a different modality. So pictures are inherently
Brandon Biggs: visual. And so if we can get the underlying data from that picture into something more useful, that's going to make the
Brandon Biggs: digital auditory maps and vibrohaptic maps more useful.
Brandon Biggs: It'll actually be able to use that data.
Brandon Biggs: It'd be like taking a recording and making a picture of the recording. So what we need to figure out how to do is turn raster information into some sort of
Brandon Biggs: vector, to get some sort of vector geometries. And then for the APIs, there are
Brandon Biggs: These tactile gloves that are coming out that should be in the next couple years. And those will allow for 3D model maps
Brandon Biggs: Which we don't have right now, or raised-line maps, like on a piece of paper, virtually. And so we need to be thinking about having APIs for these types of peripherals for XR.
Brandon Biggs: And it won't just be for maps. It'll be for a whole bunch of other things too, but maps will definitely be part of that and
Brandon Biggs: Then we just need to have lots of testing. So those are what I think are the three things that we need to be thinking about going forward with these interfaces. Okay.
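As a minimal illustration of the raster-to-vector idea Brandon raises (a hypothetical sketch, not any panelist's actual tool), the boundary edges of filled cells in a binary raster can be traced into vector segments that an audio or vibrohaptic renderer could then work with:

```python
# Hypothetical sketch: extract vector boundary segments from a tiny binary
# raster. Grid values: 1 = feature, 0 = background. Output segments are
# ((x1, y1), (x2, y2)) tuples in cell units.

def raster_to_segments(grid):
    """Return the boundary segments separating filled cells from background."""
    rows, cols = len(grid), len(grid[0])
    segments = []
    for y in range(rows):
        for x in range(cols):
            if not grid[y][x]:
                continue
            # Emit an edge wherever a filled cell meets background or the grid edge.
            if y == 0 or not grid[y - 1][x]:          # top edge
                segments.append(((x, y), (x + 1, y)))
            if y == rows - 1 or not grid[y + 1][x]:   # bottom edge
                segments.append(((x, y + 1), (x + 1, y + 1)))
            if x == 0 or not grid[y][x - 1]:          # left edge
                segments.append(((x, y), (x, y + 1)))
            if x == cols - 1 or not grid[y][x + 1]:   # right edge
                segments.append(((x + 1, y), (x + 1, y + 1)))
    return segments

# A single filled cell yields its four surrounding edges.
print(len(raster_to_segments([[1]])))  # 4
```

Real tooling would also merge collinear edges and georeference the result; the point is simply that once geometry exists, it can be rendered in non-visual modalities.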
Gobe Hobona (OGC): All right. Thank you.
Gobe Hobona (OGC): Tony, do you just want to respond to that question, please.
Tony Stockman: Yeah, and I think from my work and my point of view, one of the real problems, actually is the lack of standards for auditory displays in general.
Tony Stockman: I have been involved with the international community for auditory displays for quite a while now, and there simply are not standards for the way we represent things in audio. There are some things that are known to work relatively well,
Tony Stockman: and there are guidelines for some specific elements, for example the earcons that I talked about towards the end of my
Tony Stockman: talk just now.
Tony Stockman: But in general, there's a real shortage of standards. And I think that's really a challenge for the auditory display community to try to address.
Tony Stockman: And touching on the presentation we had on accessible maps, which I agree was excellent, I think as well there is the point, and a little bit of what Brandon said, about
Tony Stockman: overviews.
Tony Stockman: One of the limitations of the work we did on interactive maps was the fact that you couldn't easily
Tony Stockman: flip views or interact with the map at different levels. The system we implemented allowed somebody to effectively virtually walk along a route
Tony Stockman: and encounter points of interest along the way, but you couldn't kind of zoom out of that view
Tony Stockman: and traverse the map easily at a higher level, at a more rapid rate, and gain more of an overview of what was there. And that, to some extent, is what our current work is looking at.
Tony Stockman: And then finally, I'd very much endorse what Brandon said about getting users involved. We've learned an awful lot about things to do and not to do concerning auditory icons.
Tony Stockman: Through prototyping with users. And I think that's got to be a key part of the way forward.
Gobe Hobona (OGC): Right, thanks Tony.
Gobe Hobona (OGC): Nicholas,
Gobe Hobona (OGC): Would you like to respond to the question.
Nicholas Giudice: Sure. So Tony and Brandon hit a
Nicholas Giudice: bunch of the points that I would make. But I guess what I would add here is that, especially for these non-visual
Nicholas Giudice: variants of maps, most cartographers and map producers and designers haven't thought about auditory and vibrotactile design.
Nicholas Giudice: And so we need to somehow bake into any type of standard more empirically derived guidelines for
Nicholas Giudice: design that really maximize or ensure that these elements are both perceptually salient and cognitively meaningful.
Nicholas Giudice: And so, you know, we have lots and lots of information out there about map elements and line thicknesses; there's a huge array of
Nicholas Giudice: visual parameters that are just guidelines for design, but that make a big difference in how the map is perceived and learned and remembered.
Nicholas Giudice: None of that exists for these non-visual maps, and the parameters are different. You can't take a visual parameter and have it work for touch, because touch has
Nicholas Giudice: 500 times less sensory bandwidth. So we have to do the kind of low-level psychophysical work to figure out
Nicholas Giudice: what is a line that is most perceivable, what are the thresholds, when is it maximally readable, things like that.
Nicholas Giudice: That work has started, and people, including some of the folks here, have begun doing it. But for a standard to
Nicholas Giudice: be meaningful, to actually work, we have to have this really well clarified.
Nicholas Giudice: And also, Tony and Brandon both hit on this at some point: we need to think more about the information-to-modality mapping. So when we're developing a map or any
Nicholas Giudice: non-visual multimodal graphic, which properties, which attributes should be rendered by which modalities? It makes sense to give labels and distance, and, as Tony talked about, direction and things like that, through
Nicholas Giudice: auditory cues. But perhaps the actual geometric layout information might be conveyed through space. That mapping isn't clearly specified, and I think it will be really important for success with these types of maps.
Gobe Hobona (OGC): Okay, thank you, Nicholas. Okay, so, um,
Gobe Hobona (OGC): I mean, I'm from a standards development organization. And one of the first things that we do within that community is research a particular area of interest and try to understand
Gobe Hobona (OGC): what the issues, requirements and needs are around a particular topic. So my next question to you is: can you please suggest ideas for experimentation that could improve
Gobe Hobona (OGC): on the issues surrounding auditory maps and the accessibility of web maps in general? Help us to prioritize our thinking in terms of where we should focus our innovation work. Shall we start with Nicholas?
Nicholas Giudice: Sorry, I kind of already gave my answer, I guess, in the last
Nicholas Giudice: answer.
Nicholas Giudice: But yeah, I mean, the experimentation that needs to be done is the kind of work that's been started, and it really hits on those points I mentioned; and it should be done with users, as Brandon said.
Nicholas Giudice: And I think it also needs to address not just getting the data that I talked about, but how to get that information to other
Nicholas Giudice: map developers and people that are doing web-based mapping,
Nicholas Giudice: so they learn how this type of thing can be implemented. They're obviously not going to be experts in using vibration, but if we do this right and get very clear parameters and guidelines, these are things that can be used in generic
Nicholas Giudice: map applications. It's as much doing the experiments as getting the data and the results out to other people who can use them; that link is currently not always there, and I think that's an important part going forward.
Gobe Hobona (OGC): Okay. All right, Nicholas.
Gobe Hobona (OGC): Tony, do you want to respond to that question.
Tony Stockman: Yeah, sure. Um, it really comes back to what we were saying, I think about involving users and finding out what works
Tony Stockman: In, in our case, in audio. So I talked about presenting some information in speech and other information
Tony Stockman: being presented in non-speech sound. And quite often this can be done simultaneously, and somebody can understand both streams of information, but more work needs to be done on this to know
Tony Stockman: what kinds of combinations of sounds, presented simultaneously or very close to one another in time, work well, and what people can take in at any one time.
Tony Stockman: And particularly, picking up the point I made about overviews, or the ability to switch between different levels of map navigation:
Tony Stockman: as you zoom further out or get a higher-level perspective on the map, it may become necessary to present more information within a relatively short time period. So knowing what it is that people actually want to know from a map overview,
Tony Stockman: how best to make that overview configurable so that people can actually specify what elements they want to give emphasis to, and the means by which emphasis should be given to
Tony Stockman: certain elements; it's really effectively user investigations, exploring these different options and what combinations work well.
Gobe Hobona (OGC): Thanks, Tony, we're nearly out of time. So Brandon
Gobe Hobona (OGC): You're up next, would you like to respond to that question.
Brandon Biggs: Yeah, sure. So I think really the biggest thing that needs to happen is we just need to make more maps, in all different types of modalities.
Brandon Biggs: I think right now the biggest
Brandon Biggs: problem is that we haven't really tried doing BART maps or topographic maps of the Grand Canyon, or the different types of maps that are required for being a professional geographer,
Brandon Biggs: all the different types of maps that are out there. So I think that's really the next step: we just need to make a bunch of different maps. And I think that's what we're trying to do with the Audiom platform that
Brandon Biggs: I'm working on. And I know all of us here, Tony, Nick and I, are really trying to make different types of maps. So we're kind of doing that, but it's just taking a little bit of time.
Brandon Biggs: Yeah, I think that's what needs to happen.
Gobe Hobona (OGC): Okay, right.
Gobe Hobona (OGC): Okay, Doug, would you like to respond to that question.
Doug Schepers: Really quickly, I want to reiterate what everybody else has said, which is that we need user testing.
Doug Schepers: I know the limitations of standards: oftentimes there's not a lot of user research and real-world data that goes into them.
Doug Schepers: Everyone tries, in good faith. But, for example, the color contrast ratio enshrined in WCAG
Doug Schepers: forever is a little bit arbitrary. It's not really hard data. And I know this has been improved in WCAG 3.
Doug Schepers: But I really think that there needs to be considered user research, like these other folks have said. And also, to reinforce what Brandon said,
Doug Schepers: different kinds of maps have different tasks associated with them, different goals, different things you want to do with them, and we need different modalities to
Doug Schepers: deal with that. And so we need to identify those tasks:
Doug Schepers: which tasks you can accomplish with each type of map, and maybe work out how best to accomplish those things. But again,
Doug Schepers: experimentation, as everyone said, and user research, and incremental changes to some of the existing web standards that we have. And also pushing out, making sure people understand that they need to make the data available so that these modalities can be enabled through their maps.
Gobe Hobona (OGC): Okay. Thanks, Doug. Yeah. So that's all that we
Gobe Hobona (OGC): have in store for the panel. So I'd like to thank our panelists
Gobe Hobona (OGC): for this session: Doug Schepers from Fizz Studio, Brandon Biggs from Audiom and the Smith-Kettlewell Eye Research Institute, Dr. Tony Stockman from Queen Mary University and Dr. Nicholas Giudice from the University of Maine. Thank you for sharing your insight
Gobe Hobona (OGC): and discussing with us on this very important topic.
Gobe Hobona (OGC): Ryan,
Gobe Hobona (OGC): I think it's back to you.
Ryan Ahola: Great. Thanks, everyone. Thanks, Gobe, for moderating that excellent discussion. Um, so next we're moving into a set of presentations related to advances in 3D map display.
Ryan Ahola: So first off we're going to have Jan-Erik Vinje, who is the Managing Director of the Open AR Cloud Association
Ryan Ahola: and also a full-stack developer at Norkart. He's also a co-chair of the OGC's GeoPose Standards Working Group.
Ryan Ahola: And he's going to be giving a presentation titled 'From points of interest to maps of objects'. I have Jan-Erik's presentation here, so I'll just share my screen and we'll start running through it.
Ryan Ahola: Should be able to see it now and
Jan-Erik Vinje: There you go.
Ryan Ahola: Okay. So take it away.
Jan-Erik Vinje: Thank you so much for the opportunity to speak here and go to next slide actually
Jan-Erik Vinje: Are we seeing... do you see it all okay? It's just cutting off a little bit in my Zoom. That's fine. And yeah, I'm the managing director of Open AR Cloud, and we are
Jan-Erik Vinje: working on driving the development of open, interoperable technology and data standards to connect the physical and digital worlds for the benefit of all. Amongst our members, we have a couple of dozen who are
Jan-Erik Vinje: contributing as volunteers on a fairly regular basis, and we have a growing number of partner organizations from the spatial computing industry. Next slide.
Jan-Erik Vinje: So straight to the topic, we're now going to look at
Jan-Erik Vinje: How we can go from points of interest in maps to interesting objects in maps
Jan-Erik Vinje: To look into that, we might want to do a comparison of the two
Jan-Erik Vinje: So next slide.
Jan-Erik Vinje: Points of interest are very familiar to most people who have worked with maps. Typically they refer to something by their position, and that position
Jan-Erik Vinje: in most cases is a 2D geographical position: latitude and longitude, or some other geographical coordinates. They can represent very different things.
It could be referring to a city, and then there's one single point that refers to that city, so it could be a large area.
Jan-Erik Vinje: But it could also be a stationary object like, like a building or statue or something of that sort. And then you could also have dynamic points that where you have for instance vehicles or people who are moving around.
Jan-Erik Vinje: Points tend to have a few common attributes: they have categories, you give them IDs, you name them, and very typically you find some generic symbols. So the
Jan-Erik Vinje: category tends to inform what kind of generic, abstract symbol is used to represent the point of interest. And then there could be any sort of metadata. Electric vehicle chargers, to be precise, are a very interesting case of PoI data.
Jan-Erik Vinje: It could be very rich: you could have images of the charging station, you could have the status of the different chargers, whether they are available or not,
Jan-Erik Vinje: any kind of thing. So the world of PoIs is quite large and complex and interesting.
Jan-Erik Vinje: And objects in maps share many of those features, but you need to go to the third dimension when you're talking about an object. And if you want to have them in the map, you need the real-world geospatial pose: a 3D geospatial position and orientation,
Jan-Erik Vinje: which allows for six-degrees-of-freedom positioning of the object.
Jan-Erik Vinje: Also, something that is different from some types of PoIs: objects have a limited 3D volume, they have bounds, and most often you would say they have a unique 3D geometry per object. Not always, but that would be quite common.
Jan-Erik Vinje: And
Jan-Erik Vinje: you can then visualize the concrete and specific, which makes them more
Jan-Erik Vinje: direct and less abstract and generic.
Jan-Erik Vinje: They are used to represent any real or virtual objects with a 3D volume and a real-world pose.
Jan-Erik Vinje: So in that way they can represent a subset of PoIs, but do that in a way that is more spatially accurate, and they also enable new ways to use maps, like immersive maps.
Jan-Erik Vinje: And a little note here: as they share so many of the attributes of points of interest, if there were a standard for PoIs, you could easily extend it to define objects in maps. Next slide.
Jan-Erik Vinje: So yeah, this is a little bit of lamenting, because there are no standards. OGC tried;
Jan-Erik Vinje: a brave effort. And what you see is that companies like Here and Google Maps and Norkart, my company, handle PoIs in different ways. We try to assemble PoIs from a range of sources; maybe we write a wrapper, maybe we write our own idiosyncratic PoI services.
Jan-Erik Vinje: Maybe the closest thing out there in the real world would be the Point
Jan-Erik Vinje: feature of GeoJSON. It has coordinates; here in this slide you see a 2D coordinate, but it even supports 3D coordinates. And then you have properties, where you can
Jan-Erik Vinje: put in all the metadata of the point, such as the categories or IDs, that kind of thing. But it doesn't really cut it, so
Jan-Erik Vinje: Okay. So next slide.
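For reference, the GeoJSON Point feature Jan-Erik mentions (RFC 7946) looks roughly like the sketch below; the PoI values and property names here are purely illustrative, not from any of the services he names.

```python
import json

# Illustrative PoI encoded as a GeoJSON Feature (RFC 7946). Coordinates are
# ordered [longitude, latitude, elevation]; all metadata goes in "properties".
poi = {
    "type": "Feature",
    "geometry": {
        "type": "Point",
        "coordinates": [2.2945, 48.8584, 35.0],  # lon, lat, elevation in metres
    },
    "properties": {
        "id": "poi-001",                 # hypothetical identifiers
        "name": "EV charger",
        "category": "charging-station",
        "status": "available",
    },
}

# Round-trip through JSON, as a web map client would receive it.
decoded = json.loads(json.dumps(poi))
print(decoded["geometry"]["coordinates"][2])  # 35.0
```

Note that GeoJSON carries no orientation and no bounded 3D geometry, which is exactly the gap the talk identifies between PoIs and objects in maps.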
Jan-Erik Vinje: For talking about objects. I want to take you back to the time of paper maps. Next slide.
Jan-Erik Vinje: Next slide. Yeah, so this very beautiful map of Paris. Oh, previous,
Jan-Erik Vinje: one back. A very beautiful map of Paris: you see the buildings and trees and different things as three-dimensional objects in the map. It's a very rich way to visualize place.
Jan-Erik Vinje: And we see that if you go to the next slide.
Jan-Erik Vinje: Here you see this, this kind of approach is still being used.
Jan-Erik Vinje: Theme park maps tend to have this way of representing things. It's a very intuitive way to understand
Jan-Erik Vinje: and navigate and find what you're looking for. Interestingly, here there's a combination of objects that are specific and idiosyncratic, and then you have the generic PoIs, those numbers and letters that are more abstract.
Jan-Erik Vinje: And the next slide.
Jan-Erik Vinje: So now we want to return to digital maps and next slide.
Jan-Erik Vinje: And there's something going on right around now, and it's been going on for some time: map data is becoming more 3D, and
Jan-Erik Vinje: by going in that direction, maps can start to break out into the 3D world, out of the 2D screens.
Jan-Erik Vinje: So if you're looking at the starting point, we have PoIs on 2D maps. Leaflet is a library for web maps that displays flat 2D raster tiles. You can put PoIs on top of that, and then they are
Jan-Erik Vinje: in the realm of 2D coordinates. And then there's an evolution towards vector maps; for instance, Mapbox has vector tiles, and you can also
Jan-Erik Vinje: tilt those maps. They are essentially flat, but by tilting the map you get this intuition of the 3D aspect of distance, of things further away appearing closer together.
Jan-Erik Vinje: It's a great thing, but
Jan-Erik Vinje: the 3D aspect is limited to a flat plane where you might do some extrusions, and you
Jan-Erik Vinje: essentially still have a flat map to interact with.
Jan-Erik Vinje: More interesting, though, are the globe maps, where
Jan-Erik Vinje: the real world is recreated three-dimensionally on the screen,
Jan-Erik Vinje: and where you can place objects in those maps with a geospatial six-degrees-of-freedom position and orientation,
Jan-Erik Vinje: which is what we call GeoPose in OGC and also in Open AR Cloud.
Jan-Erik Vinje: And that's all great; this is real-world spatial computing, three-dimensional. But in Open AR Cloud and in the spatial computing industry,
Jan-Erik Vinje: we want to bring that map data out into the real world to create immersive maps: maps that we are in the middle of and interact with the way we interact with the real world.
Jan-Erik Vinje: So Open AR Cloud is engaging with this technology of real-world spatial computing, which is more or less an infrastructure that will be based on real-time updated 3D maps in the cloud. Next slide.
Jan-Erik Vinje: So I mentioned GeoPose, geospatial position and orientation. I'm working with Christine, who will be speaking after me,
Jan-Erik Vinje: on the OGC standards working group to define a universal standard for geospatial position and orientation. We've been working intensely since January 24, and hopefully we'll be able to publish a draft specification that can be reviewed by the community.
Jan-Erik Vinje: Next slide.
Jan-Erik Vinje: At Open AR Cloud, currently we are
Jan-Erik Vinje: hoping to use this standard in our big project. We are working on something we call the open spatial computing platform, which
Jan-Erik Vinje: we hope can enable what you might call an open spatial web, and also provide an interoperability layer for real-world spatial computing in general.
Jan-Erik Vinje: The most fundamental piece of that platform would be GeoPose, and then there's the machine-readable world, and discovering all the types of content and services related to the space around you. Next slide.
Jan-Erik Vinje: So when we're talking about the spatial web, it is very much a map concept. Those who have worked with maps all the time can
Jan-Erik Vinje: recognize the bottom part as base layers. The base layers in this concept are tied to the real physical world: the terrain, the buildings.
Jan-Erik Vinje: But in addition to what is typically the base layer, there's another aspect of reality that becomes part of this base layer, and that is the dynamic part, the stuff that is moving around. That is why
Jan-Erik Vinje: this kind of map is not handmade. It is machine-made: it is based on machine perception and reality capture using sensors.
Jan-Erik Vinje: That doesn't exclude some manual intervention, but the real-time layer would not work without reality capture and machine perception.
Jan-Erik Vinje: On top of that, there are thematic layers: the kind of layers that might be transparent and contain extra information that you overlay on your base map.
Jan-Erik Vinje: But now the base map is three-dimensional, and you're inside of it, and the overlay is all around you. So that is sort of the main difference: you live inside this map. So next slide.
Jan-Erik Vinje: This talk is about objects. Oh, you jumped two slides. So first I want to mention that on the bottom here,
Jan-Erik Vinje: when you want to represent things in the real world, you want to represent objects, and you can then combine a set of 3D geometric data with the GeoPose; that can be used for both your static and dynamic reality layers.
Jan-Erik Vinje: Those are the real things, the things that are made of atoms, that you represent in the base layer. But you have other types of objects that aren't part of the real, so you can go to the next slide.
Jan-Erik Vinje: Those can be anything: they could be virtual, or something from the past, something from the future, an artwork at your location. It can be something abstract that conveys some information about the place.
Jan-Erik Vinje: So you will be able to benefit from using objects in maps throughout all the layers of the spatial web. Next slide.
Jan-Erik Vinje: So bringing this back to W3C and web standards: I already lamented the lack of a standard for points of interest. So
Jan-Erik Vinje: somebody should pick that up; maybe W3C and OGC can get back together and figure that out. And if you go to the next slide, I have my thoughts on objects in maps.
Jan-Erik Vinje: Browser support for objects in maps should use the OGC GeoPose standard and also support automatic transforms from geospatial to Cartesian coordinates.
Jan-Erik Vinje: Currently, the proposal we're working on in our standards working group
Jan-Erik Vinje: has something called the basic GeoPose, which is in latitude and longitude, and that doesn't really work too well
Jan-Erik Vinje: in screen coordinates, or in a 3D scene that is rendered using 3D graphics. So there is always this need to make the transform from the GeoPose over to Cartesian coordinates.
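The geodetic-to-Cartesian transform described here can be sketched as a standard WGS84 conversion from latitude, longitude and height to Earth-centred Earth-fixed (ECEF) coordinates, which is the kind of step a 3D engine needs before placing a GeoPose in a Cartesian scene. This is a hedged illustration of the general technique, not code from the GeoPose proposal.

```python
import math

# WGS84 ellipsoid constants.
WGS84_A = 6378137.0                  # semi-major axis (metres)
WGS84_F = 1 / 298.257223563          # flattening
WGS84_E2 = WGS84_F * (2 - WGS84_F)   # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, height_m):
    """Convert geodetic lat/lon/height to ECEF Cartesian (x, y, z) in metres."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    # Prime vertical radius of curvature at this latitude.
    n = WGS84_A / math.sqrt(1 - WGS84_E2 * math.sin(lat) ** 2)
    x = (n + height_m) * math.cos(lat) * math.cos(lon)
    y = (n + height_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1 - WGS84_E2) + height_m) * math.sin(lat)
    return x, y, z

# On the equator at the prime meridian, X is the semi-major axis.
print(geodetic_to_ecef(0.0, 0.0, 0.0))  # (6378137.0, 0.0, 0.0)
```

A renderer would typically follow this with a translation into a local tangent-plane frame, since raw ECEF magnitudes are too large for single-precision GPU math.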
Jan-Erik Vinje: And this kind of support should be part of
Jan-Erik Vinje: the native Maps API and the native map element that is being discussed. There are especially two modalities where objects in maps become especially useful.
Jan-Erik Vinje: One would be if we could make sure that the map element supported globe modes, like you see with Cesium or Google Earth, that kind of
Jan-Erik Vinje: engine.
Jan-Erik Vinje: But also, and maybe even more so, when you go for the immersive mode, where we can pull the map up to one-to-one scale and use it from virtual reality or augmented reality.
Jan-Erik Vinje: And in particular, that's where the last point here comes in. When you're in, for instance, a WebXR immersive mode,
Jan-Erik Vinje: if you use just GPS and compass from your device, you might fall a bit short on accuracy.
Jan-Erik Vinje: So AR cloud technology tends to leverage an AR cloud map representation of the world for positioning,
Jan-Erik Vinje: and Open AR Cloud is working on developing a universal protocol where clients can speak to AR cloud positioning services, what we call GeoPose services, to obtain the device pose.
Jan-Erik Vinje: And then you might end up with centimeter, or a few centimeters of, accuracy at the current level of technology.
Jan-Erik Vinje: And if you have your device pose, you should be able to display objects in maps at their real-world locations.
Jan-Erik Vinje: There's this expression I've heard
Jan-Erik Vinje: in the AR cloud community about painting the world with data,
Jan-Erik Vinje: and some of that data could be those objects. So that is
a hope for the future.
Jan-Erik Vinje: Thank you. I think now it's over to Christine, and you will be able to learn a lot more about GeoPose.
Ryan Ahola: Great. Thanks, Jan-Erik, for that excellent presentation. It's interesting to see some of the requirements and needs for this transition to
Ryan Ahola: more 3D space and augmented reality for the geospatial web community. So that's excellent.
Ryan Ahola: Yes. And as, as Jan-Erik mentioned. So now we're moving on to a presentation, our next presentation in the session.
Ryan Ahola: which will be given by Christine Perey and also Josh Lieberman. So Christine is
Ryan Ahola: a consultant for Perey Research and Consulting. She's also a member of the Open AR Cloud Association and, with Jan-Erik, is co-chair of the
Ryan Ahola: OGC GeoPose Standards Working Group. Josh Lieberman is one of the directors of the innovation program at the OGC. So feel free to go ahead. I don't have your presentation, so I'm not sure if one of you wants to share your screen.
Christine Perey: You don't? Oh, we provided it to Peter and he
Joshua Lieberman: Oh I can
Christine Perey: Okay, Josh. Wow, what a beautiful background
Joshua Lieberman: Something like that.
Christine Perey: Perfect, actually while Josh is doing this, I want to thank everybody. And, and thanks Jan-Erik for setting the stage.
Christine Perey: For this, what Josh and I wanted to do is actually respond to a request that Peter made of us months ago. I think it was a request on behalf of the committee running this
Christine Perey: workshop
Christine Perey: that
Christine Perey: we share with you how GeoPose supports the new Web Map use cases and requirements. And Josh, can you go into full-screen mode? Is that
Christine Perey: what you see?
Joshua Lieberman: Very large screen. Hopefully, it still works.
Christine Perey: Yeah, it's just fine, just fine. So what I did as a first step is, I went into the document that was provided by Peter, which I'm sure many of you are familiar with, describing the use cases and requirements, and I pulled all of those
Christine Perey: out, the use cases and the functional requirements, and made them into tables. That's going to guide us a little bit, or at least that's the
Christine Perey: origin of the structure of this presentation. But before I do that, I think you can go to the next slide.
Christine Perey: We'll be focusing primarily on the alignment between GeoPose and Web Map, but before that we'll give a few minutes on GeoPose, and then hopefully we'll have a little time; we're going to transition into a discussion after this for
Christine Perey: more use cases and maybe talking about where things go next. One of the things I want to emphasize here is that
Christine Perey: GeoPose and the concepts that we're going to be discussing are 3D at their core, just as the objects in Jan-Erik's presentation really have three dimensions.
Christine Perey: That's the case here as well. The second big concept is the way we've broken these out: we're looking at three actors, or audiences, as they were defined by the Web Map
Christine Perey: activity. So we're looking at the content creators' roles and their needs, the visitors to websites and maps, and then the developers of applications that combine content and maps and deliver services to users. Go on to the next slide.
Christine Perey: So you've heard quite a bit already, but it's important for us to reiterate what GeoPose is.
Christine Perey: It is in Cartesian coordinates,
Christine Perey: or Euler coordinates; we've actually advanced this and are going to provide that as an option. And it's anchored to the surface of the real world. There are a couple of different concepts that allow us to extend from the simplest
Christine Perey: frame to nested or advanced frames; we include some concepts around time and sequences of frames, and you'll see it's quite a rich
Christine Perey: conceptual model. But again, as I come back to it, the core of this really is about placing objects, whether they're real or virtual or symbols, into projections that are oriented with six degrees of freedom, for accuracy and richness. Next slide, please.
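For concreteness, a "basic" GeoPose along the lines being described, a geographic position plus an orientation giving six degrees of freedom, might look like the following sketch. The field names and the yaw/pitch/roll-to-quaternion helper are assumptions for illustration, not the published standard.

```python
import math

def ypr_to_quaternion(yaw_deg, pitch_deg, roll_deg):
    """Convert yaw/pitch/roll angles (degrees) to a unit quaternion (w, x, y, z)."""
    y, p, r = (math.radians(a) / 2 for a in (yaw_deg, pitch_deg, roll_deg))
    cy, sy = math.cos(y), math.sin(y)
    cp, sp = math.cos(p), math.sin(p)
    cr, sr = math.cos(r), math.sin(r)
    return (
        cr * cp * cy + sr * sp * sy,   # w
        sr * cp * cy - cr * sp * sy,   # x
        cr * sp * cy + sr * cp * sy,   # y
        cr * cp * sy - sr * sp * cy,   # z
    )

# Illustrative GeoPose-like record: a position anchored to the Earth plus a
# quaternion orientation. Values and field names are hypothetical.
geopose = {
    "position": {"lat": 48.8584, "lon": 2.2945, "h": 35.0},
    "quaternion": dict(zip("wxyz", ypr_to_quaternion(90.0, 0.0, 0.0))),
}

# Zero rotation is the identity quaternion.
print(ypr_to_quaternion(0.0, 0.0, 0.0))  # (1.0, 0.0, 0.0, 0.0)
```

The quaternion form avoids gimbal lock and composes cleanly with the nested frames Christine mentions, which is one common reason pose standards offer it alongside Euler angles.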
Joshua Lieberman: Just wanted to make a quick comment here, because this thing about frames may be confusing. And of course,
Joshua Lieberman: the basic part of this is orienting an object, but the way that's done is by fixing a frame of reference, as Christine said, a Cartesian or possibly Eulerian
Joshua Lieberman: frame of reference. But you're fixing this frame of reference to something. For GeoPose, one of the two frames that provides that position and orientation is fixed to the Earth.
Joshua Lieberman: It can be fixed to another celestial body, right, for a SelenoPose or so on. But for now, we're talking about the Earth. The other thing, though, is that those frames can be fixed for different purposes. So
Joshua Lieberman: for the base purposes, there's some object, virtual or physical, and the frame is attached to that object, so the object's
Joshua Lieberman: position and orientation can be determined by the GeoPose. But the frame can also mean a couple of other things, and this is relevant to the Web Map cases: it can be relative to an observer's ability to see, so it can essentially represent the user viewpoint,
Joshua Lieberman: where they're looking from and what they're looking at, in a 3D or six-degrees-of-freedom perspective.
Joshua Lieberman: Or it's on the
Joshua Lieberman: Or it's on the
Christine Perey: The next slide shows that; it has the figure.
Joshua Lieberman: Yeah. So it can be the viewpoint being looked at, or the perspective being looked from. So let's go on to that.
Christine Perey: Exactly.
Christine Perey: Perfect.
Christine Perey: And so we're certainly happy to come back to this figure.
Christine Perey: If needed, to show that again. The user is, in this case, in this particular illustration,
Christine Perey: the beneficiary. But I also want to point out that there are a lot of machine-to-machine use cases for GeoPose, which are not exactly in scope for this particular workshop; the use cases are diverse in time, space and devices. So let's go to the next slide.
Christine Perey: As I said, we very much followed the framework that was provided, in terms of three categories. In our case, the assets are being published into some sort of a server where website visitors can get them, but application developers can also use those to enhance their
Christine Perey: runtime applications. We can go to the next slide.
Christine Perey: I think, Josh,
Christine Perey: our thought was that perhaps you could explain a little bit more what we have on the right-hand side of this table.
Christine Perey: In general, you can see though, that GeoPose isn't necessarily relevant for all of the use cases that were outlined.
Christine Perey: But some.
Joshua Lieberman: Yeah, I mean this. The idea here was to take the use cases that have been defined and presented for these different roles dealing with web maps.
Joshua Lieberman: In this case a content author, an author of content to be displayed in a Web Map, and look at where GeoPose might play a role. And so here there is a kind of an idea for a role in specific use cases. For example, displaying a map centered on a point location,
Joshua Lieberman: So that you can have a location, an orientation and a scale of that map.
Joshua Lieberman: And you want that to be most appropriate to a particular user, and that's generally the user's viewpoint. And as we mentioned before, one of the
Joshua Lieberman: Roles that GeoPose can play is to express a user's viewpoint. In fact, a sequence of GeoPoses can indicate a user's progress. For example, taking a tour, or changing their position and being able to display a map that has content for that viewpoint.
Joshua Lieberman: That you know can extend, for example, to displaying a route, so not only saying okay you know we have some sort of assumption that indicators are aligned according to the route, but we can actually have objects relevant to the route that are aligned in the map as someone would see them.
Joshua Lieberman: And as far as displaying custom web content that's the same thing. So being able to symbolize or mark objects, you know, they may be
Joshua Lieberman: Points of interest, but we can exactly use the GeoPose to show this is the orientation in which you would see that building or that other feature.
Joshua Lieberman: As you encountered it along the route, or from a particular perspective on the route and that extends then into creating layers by assembling those objects, which may be coming from
Joshua Lieberman: Many different places around the world, but using GeoPose, they can be placed relative to each other and within the extent of a map layer.
Joshua Lieberman: Extending that further in time, then, to the use cases of providing animated spatial data: the animation, for example of position and orientation,
Joshua Lieberman: can come from a sequence or stream of GeoPose objects, so that, although within a 2D map layer, the 3D position and orientation can be shown.
Joshua Lieberman: So there are particular use cases here where it seems there's a useful role for GeoPose. And we sort of sum that up in what we call
Joshua Lieberman: the GeoPose-assisted creator paradigm. So there's positioning content in a map using GeoPose. There's positioning a map according to a user viewpoint.
Joshua Lieberman: Or assembling a map layer from diverse features, each of which has their own position and orientation and then being able to have that change over time by the injection of updated GeoPoses for those objects.
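The "sequence of GeoPoses driving animation" idea above can be sketched in a few lines: given a timestamped sequence, a client can derive a pose for any instant. This sketch uses naive linear interpolation of latitude/longitude/height and yaw, which is only reasonable for short, closely spaced steps; a real implementation would interpolate on the ellipsoid and slerp quaternion orientations. The record shape is an assumption.

```javascript
// Sketch: derive an animated pose at time t from a timestamped GeoPose
// sequence, as a map viewer might do for a tour or a moving object.
// Simplified: lat/lon/h and yaw are interpolated linearly, which only
// holds up for short, closely spaced steps.
function poseAt(sequence, t) {
  // sequence: [{ t, lat, lon, h, yaw }, ...] sorted by t (illustrative shape)
  if (t <= sequence[0].t) return sequence[0];
  const last = sequence[sequence.length - 1];
  if (t >= last.t) return last;
  for (let i = 1; i < sequence.length; i++) {
    const a = sequence[i - 1];
    const b = sequence[i];
    if (t <= b.t) {
      const f = (t - a.t) / (b.t - a.t);
      const mix = (x, y) => x + f * (y - x);
      return { t, lat: mix(a.lat, b.lat), lon: mix(a.lon, b.lon),
               h: mix(a.h, b.h), yaw: mix(a.yaw, b.yaw) };
    }
  }
}

// A two-step "tour": move north-east while turning to face east.
const tour = [
  { t: 0, lat: 0, lon: 0, h: 0, yaw: 0 },
  { t: 10, lat: 1, lon: 2, h: 100, yaw: 90 },
];
const midway = poseAt(tour, 5); // lat 0.5, lon 1, h 50, yaw 45
```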
Joshua Lieberman: So the second
Joshua Lieberman: Stakeholder actor is the visitor, the user, the viewer of this sort of Web Map and we have again this ability to have one's viewpoint tracked
Joshua Lieberman: whether that goes to a server or just remains in the browser, which then provides instructions: I want to see this map plane at this orientation.
Joshua Lieberman: And the ability, for example, for a user to say: here's a point with one of those little carets on it, I wonder what it really
Joshua Lieberman: looks like, or what more information there is. So being able to pull more information, which includes, for example,
Joshua Lieberman: a model of the gas station and its orientation, how you would see it
Joshua Lieberman: as you came up to it, seeing that perspective in the map.
Joshua Lieberman: Another way that this provides information on a map is to provide 2D or 3D filtering, even within a
Joshua Lieberman: 2D map layer to be able to say, for example, searching or sorting within a set of features in a map layer, you know, what's at ground level or what's higher up in the air or
Joshua Lieberman: What things can I see, what things can I not see because they're hidden somewhere else. So even in a 2D map, information like which signs are facing me versus which ones I'm not going to be able to see
Joshua Lieberman: can be derived from the GeoPose information.
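As a concrete sketch of that kind of orientation filtering: the test "is this sign facing me?" reduces to a dot product between the sign's facing direction (taken from its pose's yaw) and the vector from the sign to the viewer. Positions here are simple local planar coordinates for illustration, not a full geodetic treatment.

```javascript
// Sketch: 2D "is this sign facing me?" filter derived from pose data.
// Each sign has a planar position (meters east/north of a local origin)
// and a yaw giving the direction its face points, in degrees clockwise
// from north. The viewer can read the sign only if they stand in the
// half-plane in front of it, i.e. the dot product of the sign's facing
// direction and the sign-to-viewer vector is positive.
function facesViewer(sign, viewer) {
  const rad = (sign.yaw * Math.PI) / 180;
  const face = { e: Math.sin(rad), n: Math.cos(rad) }; // unit facing vector
  const toViewer = { e: viewer.e - sign.e, n: viewer.n - sign.n };
  return face.e * toViewer.e + face.n * toViewer.n > 0;
}

const viewer = { e: 0, n: 0 };
const signs = [
  { id: "toward-viewer", e: 0, n: -10, yaw: 0 }, // south of viewer, facing north
  { id: "facing-away", e: 0, n: 10, yaw: 0 },    // north of viewer, facing north
];
const readable = signs.filter(s => facesViewer(s, viewer));
```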
Joshua Lieberman: So there's another bit here that
Joshua Lieberman: comes under the category of what really is a map. We think of a map and things like augmented and virtual reality as being very separate,
Joshua Lieberman: But there's another perspective, in which, you know, a map is really a
Joshua Lieberman: let's say a basic aid to visualization. And so we don't really talk about this as imagination or creativity. We talk about this as:
Joshua Lieberman: 'Well, to use a map, you need map skills. But map skills are: how do you, in your mind, take symbols from this flat piece of paper and insert them into your picture of the landscape that's presented before you, in order to find something, to navigate something, to
Joshua Lieberman: undertake some other task or learning with regard to that landscape.' And so there really is a
Joshua Lieberman: continuum, is my assertion, between a flat map oriented north with a fixed extent and scale, and something which becomes more
Joshua Lieberman: adaptable, more responsive to that user's interests, viewpoint, perspective and progress. So we're looking at the other end of the spectrum, so far, of augmented reality, where you really are looking at
Joshua Lieberman: what's around you and in front of you and having those map objects placed into that view. But having a Web Map that is portable and is able to
Joshua Lieberman: sort of transition from that
Joshua Lieberman: augmented reality to mapped reality, to a mapped formalism of reality, is a way it can really be put into this spectrum of use. And GeoPose, as 3D
Joshua Lieberman: position and orientation information, is an important part of establishing that continuum.
Joshua Lieberman: Okay, yeah. So orient and scale a map to my viewpoint, retrieve and view detailed objects for points of interest, filter content by the positioning and orientation and then
Joshua Lieberman: As well, having this standard bit of data about where that object should be facilitates the offline storage of those objects, and in fact enables, for example, them to be updated with small amounts of
Joshua Lieberman: positioning information, to minimize the bandwidth of updating that content.
Joshua Lieberman: So the third one is the application developer, you know what
Joshua Lieberman: Capabilities can a developer
Joshua Lieberman: Create using the assistance of the GeoPose
Joshua Lieberman: Definition of position and orientation and the ability to exchange, interchange and manipulate that. So here for example
Joshua Lieberman: Ability to provide feedback to users as they manipulate the map, so ability to show where that map plane and the map viewpoint is positioned for example, in the user viewpoint, so that you know where am I looking at in the larger context, but in 3D.
Joshua Lieberman: Moving the map to a new position or zoom level: again, going from the flat map, which we've kind of reduced to, to say, okay, that's a map. But
Joshua Lieberman: what many people have shown us is that there has been, for centuries, this desire to create maps that are in user perspective, you know, the map of Paris, and so on. It's just that
Joshua Lieberman: up until fairly recently, those maps have been quite difficult to compute. So we've taken the sort of GIS flat map as the 'that's what a map is', when there's been a much more flexible, broader perspective of what a map is
Joshua Lieberman: Through time. It's just required
Joshua Lieberman: Really clever capabilities to do that.
Joshua Lieberman: And then looking at being able to
Joshua Lieberman: generate new vector features so
Joshua Lieberman: Having GeoPose as an interchange format, enables you to take data, use that positioning from, you know, myriad sources and create new vector features.
Joshua Lieberman: And in fact, update those features, for example, by being able to drag and drop a GeoPose object, you know, this is where it is now. And, you know, have the map update according to input of these new GeoPoses or GeoPose sequences.
Joshua Lieberman: So leveraging the user GeoPose using GeoPose sequences for animation generating new features from data, including GeoPoses or even GeoPoses
Joshua Lieberman: Linked to features already there. And, you know, here's an idea. You can drag and drop those GeoPoses to animate map features, say, you know, 'okay.
Joshua Lieberman: Here's what it was. Then here's where it is now here's where it's going to be later.' So there are, this is just the start of new ways that you could use GeoPose to extend the Web Map paradigm
Joshua Lieberman: You know, in some ways that are new, in some ways that are just recovering what cartography used to be able to do, but required you know that cartographer to sit in your phone and do that.
Joshua Lieberman: Okay, do you want to talk about these functional requirements, Christine?
Christine Perey: You know, I, in the interest of time, and we've got a panel coming up.
Christine Perey: I think we should maybe move right to the third one, because that's where the greatest value lies.
Christine Perey: So,
Joshua Lieberman: These are particular capabilities that have been derived from those use cases.
Joshua Lieberman: And similarly, we've gone through and said, well, there may be some interesting ways to use GeoPose. And I'll just note, being able to use images, not just as
Joshua Lieberman: georectified images, but actually showing the swath or path of image acquisition, is a possibility. And then many of these are similar to the ones that we've
Joshua Lieberman: inferred from the use cases themselves.
Christine Perey: Exactly.
Christine Perey: Yeah, I think this really brings it home. I mean, since the requirements were driven by the use cases,
Christine Perey: they matched, and you did a great job of explaining those use cases. So we think there's a lot of potential for GeoPose in web maps and their implementations: not only the ones envisioned in the use cases and requirements, but people will certainly use this capability to
Christine Perey: invent new ways of using maps. And with GeoPose, they'll be able not only to position and orient objects, but to do so specifically with respect to a user, and also to see where and how
Christine Perey: a user sees things, from the user's point of view, when you're designing applications and so forth.
Christine Perey: The remarks that we made were organized to keep parallel with the use cases and requirements document.
Christine Perey: And I think that's shown here. So content authors will be able to enrich what they have to offer using GeoPoses.
Christine Perey: Visitors will not only have different and additional experiences of their existing maps, and augmented reality, and maybe being able to walk in a map,
Christine Perey: walk through space in a mapped way. And finally, there's a lot of potential for developers to use not only simple GeoPoses, but also sequences and animations, and
Christine Perey: really enrich the use cases. So again, as Josh also mentioned, the idea of dragging and dropping some GeoPose data
Christine Perey: into a view: how is that going to enrich the experience and make it more meaningful, and perhaps give us new ways of thinking and looking at the world? I believe that's all we have.
Christine Perey: Thank you Josh.
Christine Perey: Thank you, Ryan.
Christine Perey: Great.
Ryan Ahola: Excellent. Thank you, Christine and Josh, for that excellent presentation. So, as Christine mentioned, we're about to start a panel discussion where we'll get into more of this 3D content and also a little bit on augmented reality.
Ryan Ahola: So we can, I guess we can start that now. So I believe that all of the panel members are here.
Ryan Ahola: So maybe I'm just going to introduce everyone; as I go through the introductions, you can just show your video and wave to everyone or something.
Ryan Ahola: So I guess just to get started. First off, the panel consists of a few different individuals. We have Christine and Jan-Erik, who everyone's just met through the
Ryan Ahola: presentations that we just had. We're also fortunate enough to have Ada Rose Cannon, who is a developer advocate and staff engineer at Samsung and is also a co-chair of the W3C Immersive Web Working Group.
Ryan Ahola: Patrick Cozzi, who is the CEO of Cesium, is here, as is Thomas Logan, who is the owner of Equal Entry. So I'd like to thank everyone who has made themselves available today to participate in this panel and share their expertise on augmented reality.
Ryan Ahola: So maybe what I can start with: I was interested if any of the panelists
Ryan Ahola: wanted to provide any opening remarks before I get into questions. If there's anything that you wanted to say about what you do and your expertise on augmented reality, that would certainly be welcome. Or we can just transition into questions; it's really up to you.
Thomas Logan (Equal Entry): I'd love to just say a quick thing about my area of interest for augmented reality.
Thomas Logan (Equal Entry): So I'm Thomas Logan, coming to you from Tokyo, Japan tonight; that's why it's very dark here right now. I own a company focused on
Thomas Logan (Equal Entry): improving technology for people with disabilities. And I guess my
Thomas Logan (Equal Entry): specific area of interest, which I went pretty deep on, it's not that broad, but it's a big problem for people with disabilities, is the accessibility of crossing
Thomas Logan (Equal Entry): at busy traffic intersections, out in the real world. So I wanted to bring into at least the discussion that,
Thomas Logan (Equal Entry): even with augmented reality, a lot of times the thought is only of the visualization part, but the sonic, sound
Thomas Logan (Equal Entry): components that can be put into augmented reality are very interesting for people who are blind or have low vision, when thinking about knowing whether or not it's safe to cross an intersection. So I had done a project
Thomas Logan (Equal Entry): with New York City; of the 40,000 intersections they have in New York, only about 550 have that beeping tone locator to tell you when it's safe to walk or not, and
Thomas Logan (Equal Entry): I just bring that in as something I think is interesting to have as a user scenario; we were talking about use cases in the previous session. But thinking also about the audio components of how this map data can be interpreted is very exciting.
Thomas Logan (Equal Entry): Nice to meet you.
Christine Perey: Thanks, Thomas. I wanted to bounce off of that; I think we often overlook
Christine Perey: Audio augmented reality and it is a field, but we can come back to that.
Ryan Ahola: Would anyone else like to say something as an introduction, or we can go into questions if everyone's ready.
Patrick Cozzi, Cesium: Yeah, if I could just chime in quickly. Thank you for inviting me to the event and to the panel. A quick intro for myself: Patrick Cozzi, the CEO of Cesium.
Patrick Cozzi, Cesium: I do a lot of work in the open standards world; I co-chair the 3D Formats group at Khronos, creators of the glTF format.
Patrick Cozzi, Cesium: I'm really excited for augmented reality. I think that it's going to be a huge paradigm shift to how we experience computing right thinking about
Patrick Cozzi, Cesium: The screens on our phones and our laptops of today and what that means tomorrow and in 10 years from now.
Patrick Cozzi, Cesium: And I'm really excited about the intersection of maps and AR as some of the main killer use cases, and bringing temporal 3D to this. So, looking forward to chatting with everyone today.
Ryan Ahola: Great. Thanks, thanks, Patrick.
Ryan Ahola: So maybe we can get into questions, if Jan-Erik and Ada don't want to say anything at this point.
Ryan Ahola: And I would like to just encourage our audience who's listening to feel free to ask questions in the Gitter
Ryan Ahola: chat. I do have a few questions to pose to the panel, but if there's something specific you want to bring up, please go ahead. You're the ones we're trying to speak to, so your questions
Ryan Ahola: are certainly important.
Ryan Ahola: So maybe the first question that I was thinking of asking, and I think part of this might have been partially answered by the previous presentations,
Ryan Ahola: but I think there might be opportunities for some other opinions on this. I was thinking that 3D and augmented reality really represent
Ryan Ahola: a fundamental shift in how the geospatial and web communities think about displaying information. Especially for the geospatial community, I know myself that
Ryan Ahola: this community is still fairly focused on two-dimensional representations of content and analysis. That's how the community has
Ryan Ahola: traditionally worked, and that's where its expertise is. So I was wondering what you think these communities need to consider when they're thinking about standards for developing augmented reality and 3D maps.
Ryan Ahola: And maybe we can start with Ada.
Ada Rose Cannon: So,
Ada Rose Cannon: speaking as someone who runs one of these groups putting standards together
Ada Rose Cannon: for AR and VR on the web, I'm focusing on AR for this session.
Ada Rose Cannon: The thing which I'd like to focus on is working out
Ada Rose Cannon: the areas where there is some overlap with the features that are required,
Ada Rose Cannon: finding the bits where we can extend the existing WebXR Device API to give access to the information you need
Ada Rose Cannon: from the hardware
Ada Rose Cannon: to display the content, because the WebXR API is very low level; it's pretty much raw access
Ada Rose Cannon: to the sensor data from the headset, and access to display stuff on the headset itself.
Ada Rose Cannon: So finding the bits where getting access to this sensor data and to the displays would be really beneficial.
Jan-Erik Vinje: Yes.
Ryan Ahola: Oh yeah, that's an excellent point. I think getting that context so it's usable for these applications is certainly important. I'm just wondering if anyone else has a follow-up to that, or to the original question.
Jan-Erik Vinje: I think, yeah, that makes very much sense. We're currently deep in the nitty-gritty details of what we want to develop in Open AR Cloud as a protocol for obtaining your GeoPose from a service, and that method
Jan-Erik Vinje: Starts with the sensor data from the device and we need to find ways that we can negotiate with the server, how we can send that sensor data and what sensor data the server needs.
Jan-Erik Vinje: So we want to leverage what else is in the standards community around sensor data, and then be able to communicate that and get access to that data and communicate that data to a
Jan-Erik Vinje: web server, and then be able to get back something we can treat almost like a new type of sensor data that has been refined, that is more accurate than the original sensor data, for the purposes of something like pose.
Jan-Erik Vinje: So then you have a protocol for a GeoPose server, and that would be an awesome thing to include as a protocol for use in the web context.
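The negotiation Jan-Erik describes can be pictured with a tiny payload sketch: the client packages whatever sensor data it can offer, and the server answers with a refined GeoPose. The Open AR Cloud protocol was still being designed at the time of this discussion, so every type tag and field name below is an assumption for illustration only, not the actual protocol.

```javascript
// Hypothetical sketch of the payloads a GeoPose localization service
// might exchange. All field names and type tags here are assumptions
// for illustration, not the real (still-in-design) protocol.
function buildLocalizationRequest(sensors) {
  return {
    type: "geopose-request",
    sensors: {
      gnss: sensors.gnss ?? null,               // coarse lat/lon/h from GPS
      cameraFrame: sensors.cameraFrame ?? null, // image data for visual positioning
      imu: sensors.imu ?? null,                 // accelerometer/gyro sample
    },
  };
}

// The response can be treated almost like a new, refined kind of sensor
// reading: a GeoPose more accurate than any single input.
function parseLocalizationResponse(body) {
  if (body.type !== "geopose-response") throw new Error("unexpected response type");
  return body.geopose; // e.g. { position: { lat, lon, h }, angles: { yaw, pitch, roll } }
}
```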
Thomas Logan (Equal Entry): I just wanted to add in, I guess echoing Ryan's point, that there are a lot of government agencies or governments that would potentially be consuming these protocols.
Thomas Logan (Equal Entry): I think the data is largely in 2D. Taking it back to my pedestrian signal example as a scenario, one of the things we encountered was:
Thomas Logan (Equal Entry): the button that you actually press at a physical intersection is not always even at a standard height. It can sometimes be put
Thomas Logan (Equal Entry): near the knee, it can sometimes be put up over the shoulder, and there was no data about that measured at all for the city. So when we were talking to them, it was first, well, how do we measure that, and then what's the system that you would even
Thomas Logan (Equal Entry): store that in, so that it could be used in an AR or VR application that could synthesize exactly where that button was located. So
Thomas Logan (Equal Entry): I just offer that as a user scenario: if there were that standard, and software that you could enter that data into.
Thomas Logan (Equal Entry): I definitely encountered that as a barrier while trying to brainstorm how to build a cool solution for people with disabilities in AR.
Ryan Ahola: You know, that's an interesting point, because at least in the geospatial community, sound is something that's certainly not considered,
Ryan Ahola: because it's just not something that people are familiar with working with. So I think broadening
Ryan Ahola: our conceptions of what has to be included within geospatial, to make things work for everyone in an accessible way, is certainly important for the community to consider.
Ryan Ahola: So no, that's a very interesting point.
Ryan Ahola: And maybe we can transition to a different question. We had one in the Gitter chat asking what the best way to deal with geographic precision is when combining map data with augmented reality.
Ryan Ahola: Also, are there ways to anchor feature coordinates to the physical world?
Ryan Ahola: Maybe I can ask Patrick if he has any comments on that.
Patrick Cozzi, Cesium: Yeah, so just in 3D, even before we get to AR, precision has always been a challenge, right. And
Patrick Cozzi, Cesium: in mapping, you can do anything from zooming all the way out and seeing all the satellites orbiting the globe
Patrick Cozzi, Cesium: to zooming into a car engine, right. And a lot of the 32-bit-based GPUs will have precision errors, with jittering or with z-fighting. So a lot of the global-scale engines will go to great lengths
Patrick Cozzi, Cesium: to emulate higher precision and use precomputations for the coordinate system transforms.
Patrick Cozzi, Cesium: As for when you start bringing this into AR the precision becomes super important.
Patrick Cozzi, Cesium: Just when you're overlaying the physical world; like, say it's an infrastructure construction project and you want to put the HVAC in,
Patrick Cozzi, Cesium: but it's only just been framed out. And there are two components here. One, for other folks on the panel to discuss, is getting a really accurate GeoPose, position and orientation.
Patrick Cozzi, Cesium: And then the other is the rendering strategies to overlay it,
Patrick Cozzi, Cesium: or have it z-fight or be depth-tested with the physical world. With this, I can seed the conversation for others to pick up.
Christine Perey: Yeah, I want to piggyback on that. On the question of accuracy, Jan-Erik touched on it: if we have a spatially mapped world that we can
Christine Perey: compare to what the user's camera is sensing in real time, that will go a long way. Combining
Christine Perey: the visual processing with sensors like GPS and compass, the combination of visual relocalization and the native sensors, is a key strategy going forward, no doubt about that.
Christine Perey: But then on the other hand, there's just a lot of noise in the world, right: big blocks of
Christine Perey: metal and bodies of water are terrible, and there's a lot of noise visually as well; mirrors and reflective surfaces can reverse
Christine Perey: the readings. So we're not out of the woods, I think, from that perspective. The second part of that question was how to anchor the digital assets on the physical world. Really, at this time, there are
Christine Perey: Just a few authoring environments that allow you to do that. And you can anchor to a floating object like a traditional marker or an image or something that can move around, or you can anchor to a physical
Christine Perey: Location that's been defined in the authoring environment. I think we're at a stage now where the innovation and the kinds of things that people are doing with augmented reality is a bit held back by
Christine Perey: The, the just, it was just too few authoring tools. Everyone is using Unity and it's a great tool, if you're a game developer and you know how to use it already.
Christine Perey: And it's fantastic for prototyping and creating tests, but at the, in the long run for scaling, you can't have an app for every single object or every single orientation or place. So we're going to, I believe, see more
Christine Perey: Diversity, we're certainly looking for that from the web community from the 3D web community.
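Christine's earlier point about combining visual relocalization with native sensors like GPS and compass can be illustrated with a toy fusion sketch. Everything here (the weighting, the function shape) is an illustrative assumption, not part of any real localization pipeline; the one real subtlety it shows is that headings must be averaged on the circle.

```javascript
// Toy sketch: fuse a heading from visual relocalization with a compass
// heading, weighting the (usually more accurate) visual estimate higher.
// Headings are averaged on the circle so that e.g. 359 and 1 degrees
// fuse to about 0, not 180. The weight is an arbitrary illustrative choice.
function fuseHeadings(visualDeg, compassDeg, visualWeight = 0.8) {
  const v = (visualDeg * Math.PI) / 180;
  const c = (compassDeg * Math.PI) / 180;
  const w = visualWeight;
  const x = w * Math.cos(v) + (1 - w) * Math.cos(c);
  const y = w * Math.sin(v) + (1 - w) * Math.sin(c);
  return ((Math.atan2(y, x) * 180) / Math.PI + 360) % 360;
}
```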
Ada Rose Cannon: I'd like to follow that up with what's currently available, or at least being worked on, in the web platform today
Ada Rose Cannon: to address this. So with regard to geospatial,
Ada Rose Cannon: there is an early
Ada Rose Cannon: incubation in the Immersive Web Working Group called geo-alignment; I've pasted the link into the Gitter chat.
Ada Rose Cannon: And here's the link to the README, so you can see what the current status of this looks like; getting feedback on this API would be very valuable.
Ada Rose Cannon: It's not being super actively developed at the moment;
Ada Rose Cannon: I think the last commit to it was made in August.
Ada Rose Cannon: But it's definitely something which we are very interested in working on.
Ada Rose Cannon: With regard to the second part of your question, how do you anchor a 3D model to a space
Ada Rose Cannon: in AR, we actually have another API called Anchors,
Ada Rose Cannon: which I've put a link to. This one is a lot further developed and has implementations in the wild.
Ada Rose Cannon: And this
Ada Rose Cannon: is for anchoring:
Ada Rose Cannon: it lets you define an anchor based on a real-world position, and then it will keep that anchor on top of that real-world position, so that you can place your 3D models inside
Ada Rose Cannon: that anchor's frame to keep them in place. And that's where we're at today; that one is coming along pretty well, actually.
Jan-Erik Vinje: That sounds like a good starting point, because we have an extended view of GeoPose: we allow for
Jan-Erik Vinje: connecting different frames of reference. So obviously there is the basic GeoPose, as we define it, which has a starting frame that is anchored to a geospatial frame of reference, but we want to allow for transforms
Jan-Erik Vinje: between different local frames of reference that can be nested within each other. And every AR rendering is a case of that, because they always use a Cartesian coordinate system for the rendering.
Jan-Erik Vinje: And within that Cartesian coordinate system, you will have an anchor that you might want to
Jan-Erik Vinje: add an object to, and there's a way to go from that anchor inside the Cartesian frame of reference
Jan-Erik Vinje: all the way back, expressing the position and orientation of that anchor as a GeoPose outwards; you could transform one into the other,
Jan-Erik Vinje: and the other way around. So both ways should be entirely possible to do within the large conceptual framework of the GeoPose spec.
Jan-Erik Vinje: Yeah, that was my comment on that.
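The round trip Jan-Erik describes, from an anchor inside a local Cartesian frame back out to a GeoPose, can be sketched in a few lines. This is a hedged approximation: it uses a flat-earth east/north/up model that is only valid over short distances and assumes the local frame's axes are aligned east/north/up; the GeoPose spec's actual transform chain is richer than this.

```javascript
// Sketch: convert an anchor's local Cartesian offset (east/north/up, in
// meters, relative to a frame whose origin has a known GeoPose) back into
// a GeoPose. Flat-earth approximation, only valid near the origin, and
// the local frame is assumed to be axis-aligned with east/north/up.
const WGS84_A = 6378137.0; // Earth's semi-major axis, meters
const DEG = 180 / Math.PI;

function localAnchorToGeoPose(origin, offset) {
  // origin: { lat, lon, h, yaw } (degrees / meters); offset: { e, n, u, yaw }
  const latRad = origin.lat / DEG;
  return {
    lat: origin.lat + (offset.n / WGS84_A) * DEG,
    lon: origin.lon + (offset.e / (WGS84_A * Math.cos(latRad))) * DEG,
    h: origin.h + offset.u,
    yaw: (origin.yaw + offset.yaw) % 360, // compose the headings
  };
}
```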
Ryan Ahola: Okay, thanks everyone for those great answers. And I guess I just wanted to pick up on something Christine mentioned, because there's a related question
Ryan Ahola: about gaming in the Gitter chat. Gaming platforms really have a full system
Ryan Ahola: design that shines by enabling integration between the gaming environment and interaction with their users.
Ryan Ahola: Of course, accessibility is certainly an issue in some aspects of gaming, but there still is very good integration with users. So the question is: how does
Ryan Ahola: current development on our side of the fence, the geospatial side and the web side, align with the gaming industry? And how might we be able to leverage their efforts to help in the work that we're interested in?
Jan-Erik Vinje: Yeah, I've got to jump right back in there because, you know, I just spoke, and there's a need for it. We're in the early phase of this technology; there's a need for R&D, and
Jan-Erik Vinje: Christine mentioned that we're currently at the point where we're relying on visual positioning to have accurate GeoPoses,
Jan-Erik Vinje: and that is very brittle. Those cameras, typically on our devices, have low dynamic range, and if the lighting conditions are poor, things can fall apart.
Jan-Erik Vinje: We need multimodal developments in the future. So I suspect we have multiple decades of R&D with different kinds of sensors that could work together to provide
Jan-Erik Vinje: GeoPosing in a more robust way, so that when the weather is bad or the lighting is bad or other conditions change, you could
Jan-Erik Vinje: obtain your pose and continue to do real-world spatial computing and display things correctly in challenging conditions.
Jan-Erik Vinje: So the gaming environment, just like the AI world, and just like the autonomous driving world, is utilizing game engine technology to simulate their systems. An autonomous vehicle, for instance,
Jan-Erik Vinje: you can look at it like a device with sensors, and
Jan-Erik Vinje: all the major
Jan-Erik Vinje: organizations working on developing that field simulate an autonomous vehicle in a game engine to see how
Jan-Erik Vinje: synthetic sensor data can be handled by their autonomous driving systems, and they can tune the weather, the lighting and the behavior of other vehicles
Jan-Erik Vinje: and test those really edge cases. So the same thing applies to our AR Cloud technology area: we should have similar tools to develop our field.
Thomas Logan (Equal Entry): I'll jump in with that.
Patrick Cozzi, Cesium: Go ahead Thomas.
Thomas Logan (Equal Entry): Yeah, staying on my audio theme for today: 'The Last of Us Part II' is a game that has, kind of famously, been able to be completed by people who are blind. It has tons of spatial information, and it uses a novel sonar and sound notifications to tell you where objects and things are.
Thomas Logan (Equal Entry): So from the standpoint of looking at this professional game implementation, and knowing that people have been crying with delight about how well it works, I definitely think the gaming industry is a good place for inspiration as you design fully complete experiences: validating with users, making sure that it actually does work, and so on.
Thomas Logan (Equal Entry): I reference that game because right now it's one of the most accessible and exciting things out there, and it does do 3D audio spatialization in innovative ways.
Patrick Cozzi, Cesium: Cool. I'd just like to add: I really believe that in order to advance AR for maps, and to advance 3D geospatial in general, we really need this collaboration. It's at the intersection of games and graphics and geospatial.
Patrick Cozzi, Cesium: Because if we're just coming at it from a geospatial, map world: when you go from 2D to 3D, 3D is not 2D plus one, right? It's a whole new world.
Patrick Cozzi, Cesium: And likewise, when you're a game engine and you want to add global-scale, high-precision, real-world coordinates and work with that massive data, you have to bring in a lot of the geospatial types of techniques.
Patrick Cozzi, Cesium: So one example is, at Cesium, we've been collaborating closely with Epic Games to build 3D geospatial right on top of Unreal Engine: Cesium for Unreal. Unreal does the fundamental rendering and the lighting, which is incredibly amazing and sophisticated, and then on top of that Cesium does the streaming of the 3D Tiles. Based on the view, it brings in the right real-world content using the global position.
Patrick Cozzi, Cesium: I think we're really just at the beginning of this kind of fusion of gaming and geospatial.
Ryan Ahola: That's very interesting. I just wanted to note for everyone that Christine unfortunately had to go to another meeting; that's why she's dropped off the call. But in case she watches the recording, I just wanted to say thank you to her for her presentation earlier and her participation in the panel.
Ryan Ahola: I think we have about eight minutes left in the panel, so we have time for a couple more questions.
Ryan Ahola: So one question that came up: we've been talking mostly about future states of AR, and the 3D components of that. There's a question about how to get to that state, and whether it would be beneficial, at least as a starting point on this path toward augmented reality, if the web platform supported 2D map rendering in HTML, which is what the MapML proposal is aiming to do.
Ryan Ahola: So it would be interesting to hear any comments on whether that is at least a starting point for how to enable this new future.
Jan-Erik Vinje: A thumbs up from me. It's better to start with that, and I hope we can build on it. If we have a native map element that starts out as 2D, then the next thing is to extend it, so you could have globe maps and immersive modes later on.
Jan-Erik Vinje: Get on with it; that would be wonderful.
Ada Rose Cannon: If you want to be the change you wish to see in the world, building a custom Web Component is usually a great place to start.
Jan-Erik Vinje: Actually, yeah, I did one when I interviewed for my job at Norkart. I made a web component using one of the early libraries for creating Web Components, and it looked very much like the web component that's being proposed, where you see the latitude and longitude as parameters, and when I moved around in the map those parameters were updated. So yeah, I've been on that track some; that was 2D maps.
Ada Rose Cannon: It would be interesting if there was a native element. Currently WebXR, which is the only way to do AR on the web, has no way right now to draw HTML elements into WebXR, because it's purely WebGL based.
Ada Rose Cannon: So if there were to be some kind of native HTML element, the best way for it to get into WebXR would be if it was able to expose some kind of buffer which could be loaded onto the GPU and used with WebGL or WebGL 2 or WebGPU.
Ada Rose Cannon: Then the particular web engine you're using could take that content and do smart stuff with it, to lay it out appropriately in your environment.
Ryan Ahola: Right, thanks for your thoughts on that. I think as we move down this path for augmented reality, as it becomes a common part of everyday life, eventually we are going to be asking why the web platform doesn't support 3D and augmented reality natively. So it's probably an interesting idea to approach the 2D map component first, and then have something to build on for the 3D components.
Ryan Ahola: And we have one other question from the audience, in consideration of Jan-Erik's earlier presentation: whether you think the 'layer' terminology is confusing in a 3D or 4D environment, and whether the term 'view' would be better or has the same problem. So it's a question about terminology.
Ryan Ahola: Maybe you, Jan-Erik, since this was your presentation, would like to comment.
Jan-Erik Vinje: Yeah, people are routinely asking if 'layer' is appropriate. For me it's always been intuitive, but I understand that not everyone feels the same way.
Jan-Erik Vinje: I'm not sure about 'view', but there could be another word that hits the nail on the head; it might not be 'layer'. But 'layer' comes from the map world, and also from Photoshop and Illustrator; layers are something that is sort of in the background as a concept, easy to fall back on. But some deep thinking could reveal better concepts.
Thomas Logan (Equal Entry): Unity uses that layer concept for laying out objects and setting that on them. So I think, at least, I've seen that terminology used in that environment as well.
Ryan Ahola: Okay, I guess we're down to about three minutes left on the panel, so I think we'll do just one more question as a closing question, and maybe everyone can address it.
Ryan Ahola: I was interested in this: if there's one thing that you would like the attendees of the workshop to help with or consider for maps and 3D/augmented reality, what would it be? We have an audience here today that actually has the capability to do something, so what would you ask of them? If anyone has an opinion right away, feel free to jump in; we have time for everybody.
Ada Rose Cannon: One thing which I'd really like is if people could take a look at some of the repos in the Immersive Web Working Group, and maybe even get involved, if your company is a W3C member and is able to get involved in the work we're doing in W3C.
Ada Rose Cannon: It would be really great to have the input of people with maps as a use case, giving their opinions and their input on whether or not the APIs we're designing will fit your use cases. We definitely do not want to preclude augmented reality maps from working on the web.
Thomas Logan (Equal Entry): I'll just add that 2D maps, in my world, have already been a constant challenge from when I got into accessibility. So I'll just put it out there to really make sure people do care about and consider the use cases in 3D, because we still have a lot of challenges with just the 2D solutions for people with disabilities.
Thomas Logan (Equal Entry): I'm very excited about what can come, but I'm someone who is always interested in trying out people's prototypes of the things they're working on. So definitely think of me as someone who likes to try out samples and can help you with user scenario generation.
Jan-Erik Vinje: If I can chime in: I mentioned in my presentation that Open AR Cloud is working on designing and implementing a reference open spatial computing platform, and, what is more, we are actually trying that platform out at city scale, starting with two European cities: the city of Bari in Italy and the city of Helsinki in Finland. During this autumn there might be a few more cities as well.
Jan-Erik Vinje: We are seeking people who want to participate in experimenting with what could be seen as a prototype open spatial web platform, consisting of standards and protocols. All the standards and protocols are in flux, works in progress; nothing is fixed. So it's a true R&D kind of thing, but at grand scale: Bari, for instance, has a hundred square kilometres that have been mapped, so you can obtain your GeoPose there. That allows for the kinds of use cases that span a city.
Jan-Erik Vinje: So one thing you can do is go to the Open AR Cloud GitHub and look at all the different projects there; most of them are related to the open spatial computing platform. I encourage you to look into those, and I also encourage you to join Open AR Cloud and participate in our working groups and our meetings on the open spatial computing platform.
Jan-Erik Vinje: I think we're onto something very exciting. It's like 1989, when Tim Berners-Lee was trying to invent the web with some standards and protocols and some implementations. Now we have this new opportunity for the spatial era. And this is not a one-man show; we're trying to bring a global community together to solve these challenges, not only the technical ones but also the ethical ones around privacy, security, and user control.
Patrick Cozzi, Cesium: Yeah, just to round things out, piggybacking on that global community comment: I do believe that AR hardware is not yet mature and widespread for both personal and professional use cases.
Patrick Cozzi, Cesium: But we software folks should be building now, and we need a combination of platforms that will enable everyone to build on top of, and standards for interoperability. So I would just put out a very broad call to action: anything that you can do to contribute to open standards, whether it's W3C, Khronos, or OGC, anything from use cases to actual spec work and implementation, is really going to help lift up the whole community and bring AR forward as fast as we can.
Ryan Ahola: Great, thanks everyone for those last answers to that final question.
Ryan Ahola: I'd just like to thank everyone for participating on the panel today. I thought it was really very interesting, especially for myself as a stereotypical geospatial user who doesn't really know anything about augmented reality or 3D; it was really illuminating to see the future of the work that we're going towards. So thank you everyone for your excellent participation.
Ryan Ahola: So the last portion of our session today is supposed to be a breakout session related to maps of objects, specifically GeoPose for web maps, which I believe is being led by Jan-Erik Vinje again.
Ryan Ahola: I just wanted to check with our program committee members, because I believe we're a little over time today, so I don't know if there are any considerations around that, or if we can just move ahead with the breakout session.
Amelia Bellamy-Royds: I think
Amelia Bellamy-Royds: We can just continue on in the same space. I think our captioner is dropping off but beyond that whoever is interested in the breakout it'll just continue on in the zoom conference system.
Ryan Ahola: Great, thanks, Amelia. So with that we're free to continue. Jan-Erik, you're free to go ahead.
Ryan Ahola: Okay.
Jan-Erik Vinje: Well, with that, you know, being respectful of people's time, we might try not to spend the entire 30 minutes. So maybe we'll just open the floor to questions and suggestions.
Jan-Erik Vinje: Are you able to facilitate that, so everyone who raises their hand can chime in?
Ryan Ahola: Yeah, that's fine. I think we have a fairly small group now, so if people want to ask their questions in the Gitter channel, that's fine; I'm happy to read them out. Or if people just want to speak in the Zoom session, that should be fine too. I think it will be okay.
Peter Rushforth: Awesome. It's been a real challenge staying quiet for that long. Just kidding.
Jan-Erik Vinje: Well, feel free. You know,
Amelia Bellamy-Royds: Okay.
Amelia Bellamy-Royds: So to get things started.
Amelia Bellamy-Royds: We've got people coming from geospatial and thinking of data formats.
Amelia Bellamy-Royds: You talked about GeoPose as a conceptual thing. But how do you actually integrate that in
Amelia Bellamy-Royds: To like, your GeoJSON or your other map data format.
Jan-Erik Vinje: Yeah, so GeoJSON sort of was an inspiration for GeoPose, because it was so simple and easy to comprehend just by looking at it, and it was compact. It wasn't a massive, overly complex XML structure, so it's been very popular in the web map community. We use it a lot in my workplace; we sometimes store map data directly as GeoJSON. For instance, we allow users to draw features, save them, and use them later, and that was a good case for GeoJSON.
Jan-Erik Vinje: But even though GeoJSON supports 3D coordinates, it's very much based in the GIS map data paradigm, with lots of coordinates in, for instance, latitude and longitude. If you want to have a pose, it's hard to work in latitude and longitude. That is why the basic GeoPose creates a local Cartesian coordinate system that is tangential to the ellipsoid at that position, at those particular coordinates.
Jan-Erik Vinje: So you might be able to add some attributes to a GeoJSON feature that could express the orientation of something at that point. That could be grafted directly on top of GeoJSON. It's not a standard, though; you might want to add some sort of extension standard to GeoJSON that could support pose.
Jan-Erik Vinje: The way we encode it in our OGC GeoPose standards working group (it's not published yet) points towards a JSON encoding format, for the basic GeoPose implementation targets, that is even more succinct than GeoJSON. That's possible because pose is a simple concept; it's not all kinds of lines and polygons and everything, so you don't need a lot of structure, and you end up with a simpler object.
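As a purely hypothetical illustration of grafting a pose onto GeoJSON along the lines described above: the `geopose` property name and its shape below are invented for this sketch, and are not part of GeoJSON (RFC 7946) or the GeoPose draft.

```python
import json

# A GeoJSON Point feature carrying an orientation as a quaternion in the
# local east-north-up tangent frame at the point. The "geopose" property
# and its structure are made up for illustration.
feature = {
    "type": "Feature",
    "geometry": {"type": "Point", "coordinates": [16.87, 41.12]},  # lon, lat (Bari)
    "properties": {
        "geopose": {
            "position": {"lat": 41.12, "lon": 16.87, "h": 5.0},
            "quaternion": {"x": 0.0, "y": 0.0, "z": 0.383, "w": 0.924},
        }
    },
}

# Standard GeoJSON tooling ignores unknown properties, so this round-trips
# through ordinary JSON serialization untouched.
decoded = json.loads(json.dumps(feature))
```

A dedicated GeoPose JSON encoding would be smaller still, since it only needs the position and orientation, with none of GeoJSON's geometry machinery.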
Jan-Erik Vinje: Yeah.
Jan-Erik Vinje: Does that answer some of the question?
Amelia Bellamy-Royds: Yes, thank you; that gives me more of an idea of what this would look like.
Amelia Bellamy-Royds: At the risk of dominating the conversation, another question: are there devices that can currently record this information? If I'm taking a photograph, most phones can give me the GPS coordinates of where I was standing, but can they give me the pose information, so that you could later reconstruct what direction I was looking in when I took that photograph?
Jan-Erik Vinje: Well, I'm not an expert on EXIF data, but there's a lot of metadata you can have associated with a picture. Maybe someone else knows whether capturing geomagnetic sensor data and that kind of thing is part of taking a picture on a smartphone today. I don't quite know, but it should be possible.
Jan-Erik Vinje: Oh yeah. No one here seems to know,
Amelia Bellamy-Royds: Anybody else played around with
Amelia Bellamy-Royds: The device sensors and
Amelia Bellamy-Royds: Figuring out what your camera's looking at?
Jan-Erik Vinje: Here's what you could do: if you are in an AR context and you're speaking with a visual positioning service that returns a very accurate position and orientation (there are a couple of service providers that provide geospatial position and orientation already), then you could bundle the image together with that kind of extra metadata, if it's not supported directly in EXIF. So if you take a picture while you're in an AR context and you obtain a pose from when you took that picture, it should be entirely possible to annotate the picture with it.
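For what it's worth, EXIF does define GPS direction tags (GPSImgDirection, a compass heading), so a partial orientation can travel with a JPEG even without an AR session; a full pose with pitch and roll is not covered. Below is a sketch of decoding such values. The GPS numbers are made-up sample data standing in for what a real camera might embed, rather than values read from an actual file.

```python
def dms_to_degrees(dms, ref):
    """Convert an EXIF-style (degrees, minutes, seconds) tuple to signed
    decimal degrees; south and west references become negative."""
    deg = dms[0] + dms[1] / 60 + dms[2] / 3600
    return -deg if ref in ("S", "W") else deg

# Hypothetical GPSInfo payload, shaped like decoded EXIF GPS tags
# (values illustrative only).
gps_info = {
    "GPSLatitude": (45.0, 25.0, 12.0), "GPSLatitudeRef": "N",
    "GPSLongitude": (75.0, 41.0, 48.0), "GPSLongitudeRef": "W",
    "GPSImgDirection": 134.5, "GPSImgDirectionRef": "T",  # degrees from true north
}

lat = dms_to_degrees(gps_info["GPSLatitude"], gps_info["GPSLatitudeRef"])
lon = dms_to_degrees(gps_info["GPSLongitude"], gps_info["GPSLongitudeRef"])
heading = gps_info["GPSImgDirection"]  # where the camera was pointing
```

So position plus heading can already be recovered from well-populated EXIF; anything richer, like a full GeoPose, would need the kind of bundled metadata described above.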
Doug Schepers: Slightly meta and off topic, but I just wanted to note that EXIF is only one format, and I believe it's only supported in JPEG and a few other formats. There are lots of different metadata formats for raster data.
Doug Schepers: There was an attempt a few years ago at W3C to unify them, but it didn't really go anywhere; a spec was produced, but it wasn't really adopted. I think that is actually fecund ground for standardization: starting to look at how this metadata could be encoded in raster images, if we wanted to do something like that. That could also tie into the annotation work that was talked about earlier. There's a lot of fertile ground there, but there are still also some challenges.
Jan-Erik Vinje: The algorithms used for visual positioning mean that if you have a good map of an area, you can always obtain the pose after the fact. So if you come back three days later with a photo, and you know the general area and have the local map, you can run an algorithm and obtain your pose after the fact.
Jan-Erik Vinje: That could be done for historical images too. It's more challenging, because things might have changed: vegetation, paint on walls, buildings that have been removed, all that. But it's certainly possible to do with historical images as well. We see some apps doing that already, where there's a movie from a camera standing somewhere, and they put that movie into an AR experience: you move to the right location and you get a sort of window into the past, in the real world. A fun experience, it seems.
Jan-Erik Vinje: Yes, I think.
Jan-Erik Vinje: I'm not going to hold everyone here; it just seems the questions and comments are starting to dry up now.
Amelia Bellamy-Royds: Do you have any
Amelia Bellamy-Royds: Demos that you have something you can actually show us more than just the static images from the slides.
Jan-Erik Vinje: Well, you can Google our partners. If you Google Immersal and Augmented City, you'll be able to see some of the ways they can overlay stuff in the city-scale use cases.
Jan-Erik Vinje: That is definitely one thing you could do, and there will be more demos pretty soon from the test beds, where hopefully there will be demos of similar things, but running on the platform: running on the preliminary GeoPose protocol, using the early version of the GeoPose encoding format. It's something that could scale in a different way than having a number of different service providers with idiosyncratic formats and protocols.
Amelia Bellamy-Royds: Okay, that's great if you could put the actual names of those companies in the Gitter chat later so it's easier for people to find that would be great.
Jan-Erik Vinje: So just go to YouTube and find those partners; those are the two companies that are running the first two tests. Immersal is running the one in Helsinki, and Augmented City is running the one in Bari.
Doug Schepers: I have a question. It's a little speculative.
Doug Schepers: But since we have time. I may as well ask it.
Doug Schepers: I had been thinking about maps, I'll be honest, as sort of 2D objects. I recognized that there's an intersection with XR, but I hadn't really thought too deeply about it. The more I think about it, though, the more I wonder whether it might be a good idea to start with something more ambitious that enables a broader set of use cases.
Doug Schepers: Including not just traditional flat digital maps, but 3D, perhaps with GeoPose. It seems to me like having GeoPose in XR is pretty relevant to an emerging web that has AR and VR, and it seems like maps could be folded into that as a subset. Am I thinking about it wrong, or do other people feel like maybe maps by itself is actually only a subset of what we could do with AR, VR, GeoPose, and 3D environments?
Peter Rushforth: I'm sure you're right, but you've got to start somewhere. I shouldn't be jumping in on Jan-Erik's session, but you've got to start somewhere, and 2D... I mean, I wouldn't call it simple, because I'm not one of the geniuses inventing the 3D world, and 2D already seems pretty hard to me. Especially the notion of making the content responsive to the container: in 3D, in reality, the container is the person, right? But in the 2D web map, the container is a rectangle on the screen.
Peter Rushforth: It seems to me that they're connected, obviously, nowadays with the mobile web and so on. But we haven't even got the browser rendering basic 2D stuff in that rectangle for us yet; it's all done by web developers. So I would just like to see the plan for integrating this into a web that people can use, that school kids can learn. Because I'm hearing a lot of high-tech stuff here, and, you know, where does that come back to HTML? So far as I can see, nowhere, except in the div.
Doug Schepers: Sure.
Jan-Erik Vinje: I think both perspectives make very good points. This is a classical thing in software engineering: you want to do the redesign and refactoring, and you can try to say, let's throw away the things we already did and start from scratch. That allows some things to be easier. But then all the stuff you did with the old paradigm suddenly comes back: we used to do this, and now we can't do it anymore; how do we bring that in?
Jan-Erik Vinje: So you always end up with a long journey anyway, where you have to pull stuff from the old paradigm into the new. You do get the opportunity to have a gateway and say: this is redundant, we don't actually need it, it would be meaningless to bring this into the future paradigm. That might be the advantage of doing it that way. The big problem is that when you throw everything away and start from scratch, it tends to take a long time before you have something that is remotely as usable as what you originally had.
Doug Schepers: That's true. And you also lose the community of practitioners from the older tradition. There's a people loss there, as people who are very knowledgeable about one area might not feel comfortable going into something that's too radically different.
Doug Schepers: And, thinking about it from a pragmatic perspective, there are certain controls that you would expect to have in a map: zoom, pan. And there's the notion of tiling, which, as far as I can see, is not really present in AR/VR 3D worlds.
Jan-Erik Vinje: Oh, there you would be wrong.
Doug Schepers: Oh, well, there I am.
Jan-Erik Vinje: Tiling is very useful in the 3D world as well.
Doug Schepers: Well, then pan and zoom seem applicable too. So maybe we should dig into the similarities between the worlds, in order to make sure that even if we start with something simpler (though not simple), maps, it is amenable to extension into 3D environments.
Jan-Erik Vinje: Yeah, actually, that's interesting: a few hours ago I was showing an AR prototype to someone, and one thing he asked, out of the blue (I had no idea he would ask that), was: how could you zoom in this?
Jan-Erik Vinje: Obviously, if you originally have a web map, you have zoom controls. If you then extend the original web map to an immersive map and you still have a zoom control, what you would have to do is take the same concept, but now, instead of looking down at a 2D map and zooming into it, you're looking along your pose, and you would zoom along your pose. It's a similar concept, but you could leverage the same control and have similar user interactions.
Doug Schepers: And having a native zoom and pan mechanism in the browser, whether that's an API or however the interface goes, would help with accessibility across different mediums, because you could expose it as a standard control rather than just as a button or whatever.
Jan-Erik Vinje: Much better with an API, exactly. Yeah.
Doug Schepers: Thanks I appreciate you indulging me in that speculation.
Jan-Erik Vinje: [laughs] Okay.
Jan-Erik Vinje: I think I should soon leave to have some dinner, and we're probably quite a bit over time. So thank you everyone for hanging around for this interesting discussion, and I hope people from this community stay in touch and bump into each other, bringing spatial computing forward.
Jan-Erik Vinje: Bye bye.
Doug Schepers: Thanks, everyone.
Ada Rose Cannon: Bye
Ada Rose Cannon: Thanks for having me
Peter Rushforth: Thank you very much Ada Rose
Doug Schepers: Thank you Ada Rose
Ryan Ahola: I just wanted to thank everyone for participating today. And a reminder for everyone: tomorrow we will be continuing the workshop; the session starts at 12 o'clock Eastern time, which I believe is 1600 UTC. So we will pick up again tomorrow; we'll see everyone then. Thank you.
Doug Schepers: Thanks for moderating Ryan.
Ryan Ahola: Thanks. All right.
Bryan Haberberger: Good job, everybody. Thank you. Thanks.
Ted Guild: Boom, boom, boom.