W3C

- DRAFT -

Entertain Web with Musical Instruments

13 Nov 2013

See also: IRC log

Attendees

Present
Masao, Sam_Sugimoto, Olivier_Thereaux, Wenmei, Gao, Israelh_(Microsoft), haijun_liu_(ZTE)
Regrets
Chair
Ryoya Kawai
Scribe
Ian

Contents


Meeting: Entertain Web with Musical Instruments

…topic is the MIDI API, some activities in Japan, and what's next

…we are planning a hackathon with people working on the Audio API

<cwilso> (and cwilso on phone)

Ryoya Kawai

Masahiro Kakishita

Ryoya: We have the Web Audio API and will have the Web MIDI API. The Web browser also has access to the camera and mic, so it is a great environment for music

…the purpose of the Web MIDI API is to allow people to manipulate musical instruments directly from within the browser

[History of MIDI protocol]

Important points about MIDI:

- has velocity

- easy to create sequence data

- not just for music

RK: The MIDI market for music is about 11.8B USD

MIDI for ringtones is 10B USD

MIDI for karaoke is 7.5B USD

[Demo]

(Demo is of Web MIDI API)

Using: Chrome and a polyfill

…external keyboard + software synthesizer

…both are connected via the Web MIDI API

…you can connect the MIDI device to the software synthesizer with 20 lines of JavaScript

<cwilso> the polyfill runs on top of an NPAPI plugin

<cwilso> (native code, runs on windows/mac)

<cwilso> used to use Java, but it was quite slow.

<olivier> gotcha

…the app reads Standard MIDI Files (demonstrated live)
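[For illustration, a minimal sketch of the kind of glue code described above, assuming the standard Web MIDI API surface; the 2013 polyfill's interface differed slightly:]

    // Route messages from the first MIDI input (keyboard) to the
    // first MIDI output (software synthesizer).
    navigator.requestMIDIAccess().then(function (access) {
      var input = access.inputs.values().next().value;
      var output = access.outputs.values().next().value;
      input.onmidimessage = function (event) {
        output.send(event.data); // forward note-on/off, CC, etc. unchanged
      };
    }, function (err) {
      console.log("MIDI access unavailable: " + err);
    });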

[On activities]

RK: We have 100 devs in Japan who love music and the Web…a community creating MIDI-based Web apps and publishing them on GitHub

We had an event on Sep 12 2013…https://www.youtube.com/watch?v=QcdppXSz2Ms

…that video has had 2087 views

Olivier: To do interesting things, you want both the MIDI and the audio APIs…typically you use MIDI for control and audio for effects
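[To make that concrete, a sketch of MIDI-for-control plus Web Audio-for-sound: a note-on message triggers an oscillator at the corresponding pitch. The fixed half-second tone is a simplification for illustration, not taken from the demos:]

    var ctx = new AudioContext();
    function onMIDIMessage(event) {
      var status = event.data[0] & 0xf0;
      var note = event.data[1];
      var velocity = event.data[2];
      if (status === 0x90 && velocity > 0) { // note-on with nonzero velocity
        var osc = ctx.createOscillator();
        var gain = ctx.createGain();
        osc.frequency.value = 440 * Math.pow(2, (note - 69) / 12); // MIDI note -> Hz
        gain.gain.value = velocity / 127; // velocity -> loudness
        osc.connect(gain);
        gain.connect(ctx.destination);
        osc.start();
        osc.stop(ctx.currentTime + 0.5); // short tone; a real app tracks note-off
      }
    }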

<cwilso> thanks

<cwilso> ?

<olivier> back?

<cwilso> yes

RK: Also did a hackathon

…cool demo combining visualization of sounds, background images, music layers, etc.

<cwilso> yeah, I can hear quite well

<cwilso> well, the demos come through less well, but some of them I know anyway

[Demo of using face detection to affect visualization and sound when the person doing the demo opens their mouth…uses getUserMedia]

[Another demo using getUserMedia to look at a whiteboard and play sounds based on a grid with dots in various parts of the grid]
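[For reference, the camera-capture call behind such demos, sketched with the promise-based getUserMedia; 2013-era demos used the prefixed callback form (webkitGetUserMedia). The face/marker analysis itself would come from a separate library:]

    navigator.mediaDevices.getUserMedia({ video: true }).then(function (stream) {
      var video = document.querySelector("video");
      video.srcObject = stream; // show the camera feed
      // draw frames onto a canvas and let a detection library
      // drive visuals and MIDI/audio output from what it finds
    }, function (err) {
      console.log("camera access denied: " + err);
    });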

RK: People are already asking about when the next music hackathon will take place
... My company (Yamaha) just released a device for singing voice synthesis

[Demo showing combo of external keyboard + voice synthesizer USB stick + external speaker via Web app]

OT: How do people react to the idea of using all the other Web APIs in conjunction with the Web MIDI API?

…do they use lots of the other APIs when hacking?

RK: Most people have been using MIDI+audio, since we provided them with a lot of musical instruments.

IH: Who created the polyfill?

Answer: Chris Wilson. :)

RK: I also created a Web MIDI API wrapper

OT: If you want to deal with MIDI at a higher level than just numbers, you'll need libraries as well

RK: Lots of people were using the library I created at the hackathon

http://github.com/ryoyakawai/WebMIDIAPIWrapper
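[To illustrate the point about numbers: raw Web MIDI messages are byte arrays, e.g. note-on is [0x90 | channel, note, velocity]. A wrapper hides those bytes behind named helpers; a hypothetical example in that spirit (not WebMIDIAPIWrapper's actual API):]

    // Hypothetical helpers, for illustration only.
    function noteOn(output, channel, note, velocity) {
      output.send([0x90 | (channel & 0x0f), note & 0x7f, velocity & 0x7f]);
    }
    function noteOff(output, channel, note) {
      output.send([0x80 | (channel & 0x0f), note & 0x7f, 0]);
    }
    // noteOn(out, 0, 60, 100) plays middle C at velocity 100 on channel 1.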

Discussion

- how can we build an entertainment Web with music?

- what should we do next?

- any ideas for the next hackathon?

Olivier: One of the things we are facing is that the Web MIDI API is not yet implemented in any browser.

<cwilso> except chrome canary

Olivier: it's being implemented in Chrome, but right now you need the polyfill

…with NPAPI being sunsetted, there needs to be visible support for the Web MIDI API for the Chrome team to keep pushing on their implementation

…we welcome public support for the work

IH: Who is motivating you to push the Web MIDI API?

…who is asking for the API to build apps?

OT: MIDI manufacturers are quite supportive of the idea of connecting with the Web

Masao: Any interest from Disney and similar companies?

OT: I will introduce you to Ted Leung

[game industry also interested]

<cwilso> thanks Kawai-san, for chairing!

Summary of Action Items

[End of minutes]

Minutes formatted by David Booth's scribe.perl version 1.138 (CVS log)
$Date: 2013/11/13 07:14:27 $
