W3C

Automotive Working Group Teleconference

28 Feb 2017

See also: IRC log

Attendees

Present
PatrickB, Hira, Ted, PatrickLue, Urata, Paul, Fulup
Regrets
Chair
Paul
Scribe
Ted

Contents


Paul: I believe we were planning on focusing on access and the data
... there was interest in notifications but that hasn't been released as part of ViWi yet

PatrickB: they are not yet released but I think we can release them in another week or so. in the meantime we can look at media
... very different from vehicle signals

https://www.w3.org/Submission/2016/SUBM-viwi-service-media-20161213/

<paul> https://www.w3.org/Submission/2016/01/

https://www.w3.org/community/autowebplatform/wiki/ViWi

PatrickB: we shared these descriptions in Burlingame based on this json format file
... here we are talking about the media library service

https://www.w3.org/Submission/2016/SUBM-viwi-service-medialibrary-20161213/

scribe: we always have a service (media library) and resources; here you can see artist, song, album cover
... under medialibrary/tracks I can find all my local music indexed
... you could do a GET on all tracks
... a track has a name, id, uri, it can have an image, links to other items such as folders, artist, album
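A minimal sketch of the track resource described above, assuming the field names mentioned on the call (name, id, uri, image, links); the exact names and shapes in the published ViWi medialibrary submission may differ:

```python
import json

# Hypothetical ViWi track resource, sketched from the attributes mentioned
# in the discussion. The field names here are illustrative assumptions,
# not taken verbatim from the spec.
sample_track = json.dumps({
    "id": "0ec92b45-6d30-46b9-8f06-4e912a00fbc2",
    "name": "Some Song",
    "uri": "/medialibrary/tracks/0ec92b45-6d30-46b9-8f06-4e912a00fbc2",
    "image": "/medialibrary/images/cover123.png",
    # Links to related resources such as artist and album
    "links": {
        "artist": "/medialibrary/artists/42",
        "album": "/medialibrary/albums/7",
    },
})

# A client doing a GET on the collection would receive a list of such
# objects and can follow the links to related resources.
track = json.loads(sample_track)
```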

[demonstration on webex shared screen]

PatrickB: as you can see this is all conducive to REST and HTTP
... it is intuitive to web developers

Paul: how did you come up with this structure? were there any influences?

PatrickB: based on common web apis
... pretty much most music services use a similar model

Paul: I could see slight variations; if people are allowed to bring their own model to this architecture, that could be problematic

PatrickB: all the parameters mentioned are optional, sometimes you might not have album information
... we have this implemented at present
... you have a resource-level endpoint that you can GET
... POST can be used when relevant
... for example, it is possible to use POST to adjust ID3 information on a track
... you can subscribe to a resource query and get a socket back
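The subscription described above can be sketched as follows; the message shape is purely illustrative, since the ViWi event mechanism had not been released at the time of this call:

```python
import json

# Hypothetical sketch of subscribing to a resource query, as described on
# the call: the client subscribes to a query and receives updates over a
# WebSocket. The wire format below is an assumption for illustration only.
def make_subscribe_message(resource_query):
    """Build a JSON subscription request for a resource query URI."""
    return json.dumps({
        "type": "subscribe",
        "uri": resource_query,
    })

# Subscribe to all tracks by a given artist (query syntax is assumed).
msg = make_subscribe_message("/medialibrary/tracks?artist=42")
```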

Fulup: I was comparing with Spotify's and it is pretty similar
... in your context, how do you intend to interact with those various vendors? will they adopt your API, you theirs, or will there be a translator?

<fulup-iotbzh> For info Spotify API is here

<fulup-iotbzh> https://developer.spotify.com/web-api/endpoint-reference/

PatrickB: we built adapters
... Spotify can register itself as a source and put it in the medialibrary
... it could handle queries and bring back resulting links
... it can easily be integrated into a media browser library

Fulup: does this mean you intend to have a server in the car responding to this API, or one in the cloud?

PatrickB: it could be either, just need the port

PatrickLue: we can have a proxy in the vehicle as well

Fulup: presently we are using @@1 in AGL reference implementation

PatrickB: there are different levels of involvement; some things are being handled by Tier 1s
... for some systems it is our developers, and sometimes we hand specs to a supplier of what we want
... here we had LG implement

Fulup: I would need to propose a reference implementation within AGL

Paul: it needs to be proved to work for them to adopt it
... the code needs to be able to run against our test suite
... for AGL they need some [linux] distro with runnable code

Fulup: if I cannot prove that then I cannot get it accepted at AGL

PatrickLue: can we point to production vehicles as proof?

Fulup: we need workable code as a starting point

Ted: need for implementations doesn't come up until later for W3C's spec process

Paul: you want something that people can run on as a development environment

PatrickB: we have only opened up something more substantive than the mock server to in-house developers and third parties
... we build HMI and applications in mock server environments and then bring together into a test vehicle
... with this model instead of multiple RPC calls you can send all the parameters you want, track, time offset, volume....
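The point about replacing multiple RPC calls with one request can be sketched like this; the endpoint and parameter names are assumptions for illustration, not taken from the spec:

```python
import json

# Instead of separate RPC calls (set track, seek, set volume), a single
# POST body carries all the playback parameters at once. The field names
# below are hypothetical.
playback_request = json.dumps({
    "track": "/medialibrary/tracks/0ec92b45-6d30-46b9-8f06-4e912a00fbc2",
    "timeOffset": 42.5,   # seconds into the track
    "volume": 0.8,        # normalized 0..1
})
```

The server applies the whole state change atomically rather than as a sequence of individual calls.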

Paul: what is the feeling of other people on the call. how do we unify signals and media or do we choose not to?

Fulup: we would need to see if we could get this to work with our identity model
... these are very different services with different requirements
... at a low level they can use the same mechanism

PatrickB: there are use cases for wanting to be able to subscribe here too

Urata: This media library's data model could be integrated into the VSS data model too.
... What I noticed is, in the case of vehicles you have only two, four or so objects of the same type, such as doors, wheels etc.
... That is very different from a large media library, in which there are thousands of objects of the same class.
... I'm not sure if it is a good thing to have VSS and this library's data model integrated together, but of course integrating is possible.

Paul: I hear need for two models and separation

Fulup: there is clearly a very different strategy on structuring the data between VSS and ViWi
... it would be much easier to map to ViWi in AGL at present but both are possible
... we could decide to use a similar approach in handling media

Paul: you presently allow for extensions and can have different data models
... due to the varied cars
... do you expect to see differences in media models?

PatrickB: you need to design your API to be flexible, as Urata was saying. all the doors have the same attributes, like windows that can be lowered (or not)
... you can access all or a particular door. this works well with object representation
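The "all doors or a particular door" access pattern can be sketched with a toy resolver; the paths and attribute names are illustrative only:

```python
# Toy model of the uniform resource access described above: the same
# object representation serves both the collection and a single element.
# Door identifiers and attributes are hypothetical.
doors = {
    "row1-left":  {"open": False, "window": {"position": 100}},
    "row1-right": {"open": False, "window": {"position": 0}},
}

def get(path):
    """Resolve '/doors' to the collection and '/doors/<id>' to one door."""
    parts = path.strip("/").split("/")
    if parts == ["doors"]:
        return doors
    if len(parts) == 2 and parts[0] == "doors":
        return doors[parts[1]]
    raise KeyError(path)
```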

Urata: Thank you

PatrickLue: we should discuss next topic before we adjourn

Paul: Fulup brought up questions of the auth model

PatrickB: you can send it the same way you would an external service with tokens

Fulup: yes, but things are more complicated with the different services (Spotify, iHeart...)

PatrickLue: they do vary

Paul: use cases are very different for signals and media services

Fulup: I want to know how to go down to the lower-level signals, as that approach is very varied

Paul: I am not against it

PatrickLue: it seems out of scope. the lower-level protocol is so varied by manufacturer, which is why we are even trying to coordinate at a higher level
... we can provide examples

Paul: we have some of these sorts of conversations with Genivi

PatrickB: we have many different platforms across VW Group
... CAN implementations vary widely, some have MOST
... different even from one model year to the next

Fulup: agree it is very hard to try at a lower level

Paul: we need two implementations for W3C process and those could be AGL and Genivi

PatrickLue: I don't see a clear path yet towards standardization

Paul: this task force is to make a recommendation to the WG on a possible path forward. we have a few technical topics worth covering such as auth

Ted: notifications and identity
... and then start on proposal

PatrickB: we could decide to advance some of the other domains such as media while VISS progresses

<fulup-iotbzh> AGL adopted OpenXC for CAN low-level signal mapping. We need something to implement an application high-level API. As of today ViWi looks like the easiest path as it is very close to the current AGL model. If people are interested in proposing code, I would be more than happy to work on a draft implementation.

PatrickB: including possibly an implementation

Ted: we aren't required to produce an implementation ourselves but could

Paul: if people are willing to implement then you can more easily entice others

Fulup: that is something we can collaborate on

Paul: I encourage people to put together their thoughts for a proposed path for the next call

Urata: I wanted to discuss scheduling
... we can perhaps discuss integrating VISS and ViWi at the Genivi F2F meeting in May
... there we can decide our strategy going forward
... (about the second-generation spec, not the current WebSocket-based spec, to avoid misunderstanding :-) )

Paul: correct, that was the timeline people found acceptable

Summary of Action Items

Summary of Resolutions

[End of minutes]

Minutes formatted by David Booth's scribe.perl version 1.148 (CVS log)
$Date: 2017/02/28 16:02:36 $