13:57:12 RRSAgent has joined #webmachinelearning
13:57:16 logging to https://www.w3.org/2024/06/13-webmachinelearning-irc
13:57:16 inviting RRSAgent
13:57:16 RRSAgent, make logs Public
13:57:17 please title this meeting ("meeting: ..."), anssik
13:57:18 Meeting: WebML WG Teleconference – 13 June 2024
13:57:25 Chair: Anssi
13:57:30 Agenda: https://github.com/webmachinelearning/meetings/blob/main/telcons/2024-06-13-wg-agenda.md
13:57:39 Scribe: Anssi
13:57:44 scribeNick: anssik
13:57:44 jsbell has joined #webmachinelearning
13:57:54 gb, this is webmachinelearning/webnn
13:57:54 anssik, OK.
13:57:58 Present+ Anssi_Kostiainen
13:58:03 Present+ Joshua_Bell
13:58:06 Present+ Michael_McCool
13:58:09 Present+ Geoff_Gustafson
13:58:18 RRSAgent, draft minutes
13:58:19 I have made the request to generate https://www.w3.org/2024/06/13-webmachinelearning-minutes.html anssik
14:00:10 Present+ Joshua_Lochner
14:00:25 Present+ Dwayne_Robinson
14:00:53 Present+ Bryan_Bernhart
14:01:05 ningxin has joined #webmachinelearning
14:01:18 Regrets+ Ningxin_Hu
14:01:43 Present+ Ilya_Rezvov
14:01:43 asully has joined #webmachinelearning
14:01:50 Present+ Austin_Sullivan
14:02:11 dwayner8 has joined #webmachinelearning
14:02:13 anssik: please welcome Jan Williams representing TPGi
14:02:16 McCool has joined #webmachinelearning
14:02:23 ... and Sushanth Rajasankar representing Microsoft Corporation to the WebML WG!
14:02:31 ... I made a few agenda tweaks:
14:02:35 -> Agenda diff for new announcements https://github.com/webmachinelearning/meetings/commit/3650edf3173179631e11a09fd774095a2fef34fb
14:02:41 -> Agenda diff for issues addressed or closed https://github.com/webmachinelearning/meetings/commit/5b40fa9905d2f47d38322eaa163a2c6cb84f25ac
14:02:49 RafaelCintron has joined #webmachinelearning
14:02:52 Regrets+ Rafael_Cintron
14:02:58 Topic: Announcements
14:03:11 Subtopic: Utilities for spec editors
14:03:15 -> webnn/tools https://github.com/webmachinelearning/webnn/tree/main/tools
14:03:27 anssik: editing web specs is not an easy task, so I'm happy to announce that Joshua Bell has contributed utilities to help editors and contributors in authoring the WebNN spec and reviewing spec changes
14:03:34 ... the tools are currently purpose-built for WebNN, but could be generalized in the future
14:03:42 ... thanks Josh! Quoting the readme:
14:03:57 ... - reformat-js.py applies clang-format to JavaScript blocks in the spec
14:03:57 ... - lint.mjs analyses the spec Bikeshed source and generated HTML to look for common errors like duplicate words or unclosed links, and helps enforce the coding conventions
14:04:01 Joshua_Lochner has joined #webmachinelearning
14:04:05 Do you want to briefly introduce these tools to the group? Any caveats or areas where contributions would be particularly welcome?
14:04:36 Josh: I don't want to make the same mistake twice, so I wrote these tools to help with that :)
14:05:05 Subtopic: Implementation status update
14:05:11 -> https://webmachinelearning.github.io/webnn-status/
14:05:17 anssik: we have exciting updates to the implementation status, thanks everyone for your hard work!
14:05:26 ... - XNNPACK backend was replaced by TensorFlow Lite across Windows, ChromeOS, Android and Linux
14:05:38 ... (the earlier MLService data was merged into the newly added TFLite column)
14:05:45 ... - added Core ML implementation status
14:05:57 ... - updated ONNX Runtime Web support info for version 1.18.0
14:06:01 -> Implementation status diff https://github.com/webmachinelearning/webmachinelearning.github.io/pull/78
14:06:02 https://github.com/webmachinelearning/webmachinelearning.github.io/pull/78 -> MERGED Pull Request 78 Update implementation status (TFLite + Core ML) (by ibelem)
14:06:05 -> CoreML support status diff https://github.com/webmachinelearning/webmachinelearning.github.io/pull/80
14:06:05 https://github.com/webmachinelearning/webmachinelearning.github.io/pull/80 -> MERGED Pull Request 80 Add WebNN Core ML implementation status (by ibelem)
14:06:32 Subtopic: Awesome WebNN updates
14:06:41 anssik: I'd like to also share some awesome updates from our community members
14:06:52 RobKochman has joined #webmachinelearning
14:06:58 anssik: Onnx2Text converts an ONNX ML model protobuf from/to text, by Dwayne
14:07:03 -> Onnx2Text https://github.com/fdwr/Onnx2Text
14:07:25 anssik: many developers are familiar with Netron, so Belem created a fork that integrates WebNN support status info into the tool
14:07:29 -> Netron with WebNN support status info https://ibelem.github.io/netron/
14:07:39 anssik: I think this Netron enhancement could perhaps be upstreamed to Netron in the near future
14:08:05 anssik: also, Joshua Bell has been working on NNotepad, a browser-based playground for experimenting with WebNN expressions without boilerplate code
14:08:25 MikeApple has joined #webmachinelearning
14:08:27 Josh: this was a hackathon output
14:08:39 Present+ Mike_Wyrzykowski
14:09:11 -> NNotepad WebNN Playground https://webmachinelearning.github.io/webnn-samples/nnotepad/
14:09:24 -> Other Awesome WebNN updates include WebNN samples for CPU, GPU and NPU https://github.com/webmachinelearning/awesome-webnn/pull/7
14:09:28 https://github.com/webmachinelearning/awesome-webnn/pull/7 -> MERGED Pull Request 7 Add NNotepad sample and some tools (by ibelem)
14:09:44 Topic: Hybrid AI exploration update
14:10:02 anssik: I've asked the Hybrid AI exploration team to present key findings informed by the caching prototype, security and privacy considerations, and possible solutions for discussion
14:10:06 ... a WebML Community Group repo was recently created for these discussions
14:10:09 -> Hybrid AI Exploration GH repo https://github.com/webmachinelearning/hybrid-ai
14:10:21 anssik: I also want to acknowledge work by our Google participants who have explored the broader problem space around caching models in the browser
14:10:36 ... and recently published an article exploring how to use the Cache API, Origin Private File System API, IndexedDB API and File System Access API for caching large AI models, with a recommendation to use the Cache API
14:10:40 -> Cache AI models in the browser https://developer.chrome.com/docs/ai/cache-models
14:10:46 -> MediaPipe LLM demo https://mediapipe-llm.glitch.me/
14:11:01 anssik: Mike, you have ~10 minutes, go!
14:11:05 Slideset: https://lists.w3.org/Archives/Public/www-archive/2024Jun/att-0000/WebML_WG_Hybrid_AI_Caching.pdf
14:11:13 [slide 1]
14:11:24 [slide 2]
14:11:30 zkis has joined #webmachinelearning
14:11:30 Mike: Outline
14:11:37 ... - Model size and download times
14:11:37 ... - Security and privacy considerations
14:11:37 ... - Caching requirements
14:11:37 ... - Possible solutions - no silver bullet! tradeoffs
14:11:53 RobKochman has joined #webmachinelearning
14:11:53 Present+ Rob_Kochman
14:11:58 present+ Zoltan_Kis
14:12:27 [slide 3]
14:12:32 Mike: Key points from offline discussion
14:12:47 ... - some models are too large to download during a session
14:12:47 ... - focus on use cases over specific models
14:12:47 ... - adapters and variants are challenging, "baked in" to models, model variants with differences in quantization etc.
14:13:21 [slide 4]
14:13:26 Mike: Security and Privacy Considerations
14:13:48 ... - Current browsers implement per-origin local HTTP caches
14:13:48 ... - Cross-site privacy risk based on cache timing analysis
14:13:48 ... - Tolerable for non-AI web resources, images, script libraries
14:13:48 ... - BUT models are large and potentially shared
14:15:26 [slide 5]
14:15:33 Mike: Possible Mitigations
14:15:41 ... 1. Disallow use of WebNN in 3rd party contexts by default, this is done
14:15:53 ... 2. Generate keys based on actual model content to avoid data exfiltration, but possibly not tracking
14:16:10 ... 3. Limit number of built models and cache checks to avoid use of multiple model existence checks
14:16:34 [slide 6]
14:16:41 Mike: Caching Desired Properties:
14:16:47 ... - Reduce Latency
14:17:06 ... - Reduce Bandwidth
14:17:15 ... - Reduce Storage
14:18:12 ... cross-site, across implementations, model consolidation?
14:18:17 ... - Preserve Privacy
14:18:58 ... observation: hard to do 1 and 4 together, 2 and 3 easier
14:19:02 [slide 7]
14:19:07 Mike: Proposal: Define New Model-Aware Caches
14:19:23 ... key ideas:
14:19:27 ... 1. Use "fake misses" (delays) to avoid redundant downloads
14:19:46 ... 2. Progress model load/timers only when the requesting page is inactive
14:20:08 ... 3. Identify cache items by content-dependent hashes
14:20:22 ... 4. Use deduplication to avoid redundant storage
14:20:29 ... Some alternatives:
14:20:42 ... 1. Use existing APIs/caches, perhaps with extensions
14:20:56 ... 2. Use the File System API + Background Fetch
14:21:42 [slide 8]
14:21:48 Mike: Prototype Status
14:21:53 ... Implemented node cache with hashes as keys, using an external Redis service for storage
14:22:01 ... Model cache seems to be more generally useful
14:22:07 ... Next steps:
14:22:18 ... - Implement model cache
14:22:24 ... - Base on Service Worker Cache, Background Fetch if possible
14:22:30 ... - Three implementation options:
14:22:38 ... 1. Capture/replay graph building (shim + extension)
14:22:38 ... 2. Modify Chromium implementation (best for perf)
14:22:38 ... 3. Cache an existing model serialization and use model loader
14:23:24 anssik: thanks Mike, any quick questions?
14:23:30 ...
... comments and questions are also welcome via the dedicated GH repo:
14:23:37 -> https://github.com/webmachinelearning/hybrid-ai
14:23:50 Mike: thanks for raising these pain points.
14:24:06 Josh: we're in touch with Mike on this topic
14:24:46 Topic: NPU support
14:24:51 anssik: issue #623 and PR #696
14:24:51 https://github.com/webmachinelearning/webnn/pull/696 -> Pull Request 696 Add MLDeviceType npu (by fdwr)
14:24:52 https://github.com/webmachinelearning/webnn/pull/523 -> MERGED Pull Request 523 Wording change: Tighten up output shape calculation algorithms (by inexorabletash)
14:25:06 ... thanks for the reinvigorated interest and discussion on this topic, everyone!
14:25:18 ... I'd like to try to pull it all together:
14:25:40 ... We agreed to start with the simplest design, option 1: a deviceType: "npu" enum with system-decided fallback; this is what we have in PR #696
14:25:45 ... Pros:
14:25:51 ... + Very simple API
14:25:55 ... + Least to test
14:26:11 ... + Affords backends the most control for fallback, since only the primary device preference is specified
14:26:15 ... Cons:
14:26:34 ... - App cannot specify the fallback preference, as the system instead decides any fallback devices
14:26:56 ... in our past discussions, this option 1 received the most support:
14:27:07 ... - Phillis commented "I actually like the simplicity of option 1. As long as we make it clear in the spec that the system may decide to fallback to other devices."
14:27:11 -> https://github.com/webmachinelearning/webnn/issues/623#issuecomment-2077837195
14:27:12 https://github.com/webmachinelearning/webnn/issues/623 -> Issue 623 WebNN should support NPU and QDQ operations (by wchao1115) [v2] [opset] [feature request]
14:27:39 anssik: - In our 2024-05-02 meeting the WG agreed to start with option 1, reserving the option to potentially expand if implementation experience shows need, thumbs up from Ningxin and MikeW
14:27:53 -> https://github.com/webmachinelearning/webnn/issues/623#issuecomment-2101095972
14:28:13 anssik: - on 2024-05-30 MikeW asked "Why is MLDeviceType even necessary and shouldn't it be a browser implementation decision to choose the most appropriate processor given the existing MLPowerPreference?"
14:28:18 -> https://github.com/webmachinelearning/webnn/issues/623#issuecomment-2140213646
14:28:38 anssik: - Zoltan responded "that could be possible even with the current API shape [...] if we can spec a way implementations could disregard user preferences/hints, telling if hints were overridden, or a fallback happened"
14:28:44 -> https://github.com/webmachinelearning/webnn/issues/623#issuecomment-2140258557
14:29:02 anssik: - JoshB shared past discussion (from Feb 2024) on the use cases for explicit device type in #302
14:29:03 https://github.com/webmachinelearning/webnn/issues/302 -> Issue 302 API simplification: context types, context options, createContext() (by zolkis) [v2]
14:29:08 -> https://github.com/webmachinelearning/webnn/issues/302#issuecomment-1960407195
14:29:08 https://github.com/webmachinelearning/webnn/issues/302 -> Issue 302 API simplification: context types, context options, createContext() (by zolkis) [v2]
14:29:16 -> https://github.com/webmachinelearning/webnn/issues/623#issuecomment-2140748141
14:29:31 anssik: at that time "coordinating ML and other workload across devices" was identified as a use case, and the proposal suggested back then by Josh was to add additional hints such as "accuracy vs performance"
14:30:05 ... later in that issue #302 Chai pointed out that the "delegating it to the OS" approach sounds good "until you realize that the OS can make mistakes you cannot correct so be careful what you wish for"
14:30:42 ... Chai also noted his preference is a device type "npu" paired with a programmable fallback device (either "gpu" or "cpu"), a design that he says is "likely to produce a better result with higher efficiency at a cost of better predictability" versus a design where "any operator can fail with an unsupported failure"
14:30:46 -> https://github.com/webmachinelearning/webnn/issues/302#issuecomment-1975066617
14:31:38 ... on 2024-06-02 MikeW noted "WebGPU, which can be implemented purely in software without a physical GPU" and asked "why is WebNN specifying a physical device type and not leaving this up to the implementation?" and proposed "It is the browser implementation which ensures WebNN computations are consistent across any physical hardware device it runs on. In that scenario, MLDeviceType should be removed from the WebNN API."
14:31:42 -> https://github.com/webmachinelearning/webnn/issues/623#issuecomment-2148939286
14:31:53 ... Ningxin shared use cases that require specifying a device type:
14:32:20 ... - compute offloading: a game engine may want to run ML tasks on the CPU to avoid interfering with the GPU time budget. See the WebNN API in gaming scenarios discussion of WG 2022-04-07.
14:32:25 -> WebML WG 2022-04-07 telcon: WebNN API in gaming scenarios https://www.w3.org/2022/04/07-webmachinelearning-minutes.html#t03
14:33:02 ... - op fallback: an ML framework may want to create a CPU context with a fallback-to-Wasm option; that would avoid expensive cross-device tensor data copies between WebNN graph inference and Wasm operator execution. Custom operations #6 and Support CPU - WebAssembly scenario of the op level execution use case #156
14:33:02 https://github.com/webmachinelearning/webnn/issues/6 -> Issue 6 Custom operations (by dsmilkov) [v2]
14:33:03 https://github.com/webmachinelearning/webnn/issues/156 -> CLOSED Issue 156 Support CPU - WebAssembly scenario of the op level execution use case (by huningxin)
14:33:06 -> https://github.com/webmachinelearning/webnn/issues/623#issuecomment-2149265770
14:33:07 https://github.com/webmachinelearning/webnn/issues/623 -> Issue 623 WebNN should support NPU and QDQ operations (by wchao1115) [v2] [opset] [feature request]
14:33:51 AramZS has joined #webmachinelearning
14:33:52 anssik: I see valuable use cases and I also see great discussion and questions. We've spent good time discussing the design. I have reason to believe the group would benefit from wider feedback. We've learned an effective way to get such feedback is to allow users and developers to explore and play with the API.
14:34:08 q+
14:34:11 ... I'd propose we update PR #696 with a prominent in-line issue block on top of the MLContextOptions spec section with text that clarifies the status of this feature, proposal:
14:34:11 https://github.com/webmachinelearning/webnn/pull/696 -> Pull Request 696 Add MLDeviceType npu (by fdwr)
14:34:18 "ISSUE: MLContextOptions is under active development and the design is expected to change, informed by further implementation experience and new use cases from the wider web community. The Working Group is considering additional API surface to allow definition of a fallback device, multiple devices in a preferred order, or an exclusion of a specific device. Other considerations under discussion include error handling, ultimate fallback and quantized operators. Feedback is welcome on any of these design considerations from web developers, library authors, OS and hardware vendors, and other stakeholders via GitHub issue #623."
14:34:18 https://github.com/webmachinelearning/webnn/pull/523 -> MERGED Pull Request 523 Wording change: Tighten up output shape calculation algorithms (by inexorabletash)
14:35:11 -> MLContextOptions https://www.w3.org/TR/webnn/#dictdef-mlcontextoptions
14:35:32 s/issue #523/issue #623
14:35:52 anssik: I believe with this in-line issue added to clarify the status of this feature, we can merge this PR to start gathering valuable wider feedback from the API users for this feature
14:36:04 q+
14:36:05 ... I expect the group to revisit the issue to review wider feedback and adjust the design accordingly to ensure we will ultimately standardize on a design that solves the right problems faced by the API users
14:36:11 ... any comments, suggestions?
14:36:13 q?
14:36:15 ack jsbell
14:36:33 jsbell: thanks for the summary Anssi, agree with your proposal for moving forward
14:37:02 ... the current device type is required for the current level of prototyping and testing, to test that we exercise all the backends; I really like your approach Anssi!
14:37:32 q?
14:37:34 ack zkis
14:39:31 zkis: In the issue I said that with the current API we can satisfy Mike's ask
14:39:32 q?
14:39:53 q+
14:40:11 Dwayne: we do need this design, if only for testing purposes; I like the prose Anssi proposed, will integrate it
14:40:12 q?
14:40:15 ack MikeApple
14:41:14 MikeApple: we understand the use cases for NPU devices and agree with Josh that we can now experiment with it and find out how it works; our concern is that it maps to hardware that exists today, and the spec will live long
14:41:16 q?
14:42:51 q?
14:43:18 anssik: I hear consensus to merge PR #696 after adding the note to the MLContextOptions section
14:43:19 https://github.com/webmachinelearning/webnn/pull/696 -> Pull Request 696 Add MLDeviceType npu (by fdwr)
14:43:22 q?
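[Editor's illustrative note, not part of the minutes: option 1 as discussed above means the app requests only a primary device and the system decides any fallback. In a browser this would look roughly like `const context = await navigator.ml.createContext({ deviceType: 'npu' });`. The sketch below models only the system-decided fallback idea; `resolveDevice` and the fallback order are hypothetical stand-ins for the browser's internal choice, not anything specified by WebNN.]

```javascript
// Hypothetical sketch of option 1 semantics from PR #696.
// resolveDevice and the fallback order are illustrative assumptions,
// standing in for a browser's internal, system-decided policy.
function resolveDevice(requested, available) {
  if (available.includes(requested)) return requested;
  // Per option 1, the app cannot influence this order; the system decides.
  for (const device of ['gpu', 'cpu']) {
    if (available.includes(device)) return device;
  }
  // "Ultimate fallback" is one of the open design questions noted above.
  return 'cpu';
}
```

[This is what the "App cannot specify the fallback preference" con refers to: the `['gpu', 'cpu']` order lives inside the implementation, not in the API call.]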
14:43:48 Topic: MLBuffer
14:43:57 Subtopic: Direct buffer sharing proposal
14:44:01 anssik: issue #688
14:44:01 https://github.com/webmachinelearning/webnn/issues/688 -> Issue 688 [MLBuffer] Support interop with WebGPU (by bbernhar) [webgpu interop]
14:44:11 ... Last call for comments on the direct buffer sharing proposal before we share it with the WebGPU group
14:44:17 ... we've received feedback from Rafael (thanks!), please feel free to share
14:44:21 -> https://github.com/webmachinelearning/webnn/issues/688#issuecomment-2161738846
14:44:21 https://github.com/webmachinelearning/webnn/issues/688 -> Issue 688 [MLBuffer] Support interop with WebGPU (by bbernhar) [webgpu interop]
14:44:52 Rafael: I agree with the overall thing proposed, implementation experience is key
14:45:23 ... we can work on the details with Bryan and then talk to the WebGPU group
14:46:02 Bryan: all good
14:46:27 Rafael: the Microsoft and Apple people are the same for WebNN and WebGPU
14:46:43 Subtopic: MLGraphBuilder and MLBuffer construction alignment proposal
14:46:52 anssik: issue #697
14:46:53 https://github.com/webmachinelearning/webnn/issues/697 -> Issue 697 Inconsistency between MLGraphBuilder and MLBuffer construction (by reillyeon) [webgpu interop]
14:46:57 ... I'd also like to discuss the MLGraphBuilder and MLBuffer construction alignment proposal from Reilly, issue description:
14:47:10 "An MLGraphBuilder and an MLBuffer are both associated with an MLContext however MLGraphBuilder is created with a normal constructor while MLBuffer is created by the createBuffer() method on MLContext."
14:47:16 ... proposal: "We should consider removing createBuffer() and defining a normal constructor for MLBuffer with the same semantics as MLGraphBuilder (or add a createBuilder() method, but I'd prefer the former option)."
14:47:24 ... good feedback from Rafael and Zoltan
14:48:02 q+
14:48:06 ack jsbell
14:48:51 jsbell: I think Rafael and Reilly are making the point that if you're calling a sync thing, passing the appropriate context implicitly or explicitly to a constructor, and getting a thing back synchronously...
14:49:04 ... you use a constructor
14:50:01 ... if construction is async, that's a different question; would like to hear Bryan's perspective on behind-the-scenes async action; a sync API returning an object prefers a constructor
14:50:01 q?
14:50:35 Bryan: related to initialization, in WebGPU when we go to initialize buffer data, that presumably is not a sync operation, init could be async
14:50:50 ... we rely on the implementation, not always the case, when is init so complex it is not async
14:51:11 ... more obvious to developers this is just a normal GPU command, defining timelines is another question
14:51:26 ... train of thought: objects can be initialized in fairly complex ways
14:51:27 q?
14:52:15 jsbell: would like to hear more from Rafael
14:52:24 Rafael: this is not something I feel super strongly about
14:52:46 ... other Web APIs that have contexts, WebGL/WebGPU, Canvas, all have this "create a thing" flavour
14:53:05 ... WebGL mirrored the Khronos spec, maybe we should not index too much on that particular API
14:53:24 ... the design principles for the web say to prefer constructors if returning an object right away
14:54:04 ... you can use a factory method with a promise, e.g. when creating a bitmap
14:54:35 q+
14:54:44 ... if the group decides a factory is preferred I'm fine with that, my preference is to go with a constructor
14:54:45 q?
14:54:48 ack asully
14:55:04 asully: talked with Reilly, this seems in line with his thoughts
14:55:18 ... the dispatch method e.g. happens on the context timeline, it does not need to be on the context itself
14:55:37 q+
14:55:46 ... if you drop the assumption that anything with a context is a method on the context, that weighs even more in favour of a constructor
14:56:18 ... slight preference for a constructor, also follow up on whether to move other methods off the context
14:56:20 ack zkis
14:57:08 zkis: want to add to my comment that because of initialization that does not change the object, we can return an object right away, and the next time we interact with the object we can initialize
14:57:22 ... if there's async behaviour it should return a promise
14:57:23 q?
14:58:17 Bryan: this is an expensive function to call, could be turned async in the future
14:58:23 ... need to come up with timelines
14:59:05 ... may not stay sync for very long
14:59:26 q+
14:59:30 ack asully
15:00:04 asully: I guess regarding async, I think the synchronously returned buffer can be used as a handle when you call dispatch, so from the script perspective things look sync but happen on this deferred timeline
15:00:18 ... can you explain this in the issue perhaps?
15:00:21 q?
15:00:42 q?
15:03:56 RRSAgent, draft minutes
15:03:57 I have made the request to generate https://www.w3.org/2024/06/13-webmachinelearning-minutes.html anssik
15:04:17 q?
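[Editor's illustrative note, not part of the minutes: the two construction shapes debated in issue #697 can be modeled with plain classes. The `*Sketch` names below are hypothetical stand-ins, not the real WebNN interfaces.]

```javascript
// Hypothetical model of the two shapes discussed in issue #697.
class MLContextSketch {
  // Current shape: a factory method on the context, like createBuffer().
  createBuffer(descriptor) {
    return new MLBufferSketch(this, descriptor);
  }
}

class MLBufferSketch {
  // Proposed shape: a normal constructor taking the context explicitly,
  // mirroring how MLGraphBuilder is constructed.
  constructor(context, descriptor) {
    this.context = context;
    this.descriptor = descriptor;
  }
}
```

[Either shape returns the object synchronously; per the discussion above, any expensive initialization could still happen later on the context timeline, with the returned object acting as a handle.]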
15:04:47 Joshua_Lochner has joined #webmachinelearning
15:04:47 RafaelCintron has joined #webmachinelearning
15:04:47 McCool has joined #webmachinelearning
15:04:47 dwayner8 has joined #webmachinelearning
15:04:47 asully has joined #webmachinelearning
15:04:47 ningxin has joined #webmachinelearning
15:04:47 geoff has joined #webmachinelearning
15:04:47 reillyg has joined #webmachinelearning
15:05:58 s/issue #523 and PR #696/issue #623 and PR #696
15:06:11 Joshua_Lochner has joined #webmachinelearning
15:06:11 RafaelCintron has joined #webmachinelearning
15:06:11 McCool has joined #webmachinelearning
15:06:11 dwayner8 has joined #webmachinelearning
15:06:11 asully has joined #webmachinelearning
15:06:11 ningxin has joined #webmachinelearning
15:06:11 geoff has joined #webmachinelearning
15:06:11 reillyg has joined #webmachinelearning
15:06:14 RRSAgent, draft minutes
15:06:15 I have made the request to generate https://www.w3.org/2024/06/13-webmachinelearning-minutes.html anssik
15:09:37 s/We've spend/We've spent
15:12:37 s/is create for/is required for
15:14:42 s/understand the use cases/understand the use cases for
15:15:28 s/want to understand and agree/and agree
15:15:31 RRSAgent, draft minutes
15:15:32 I have made the request to generate https://www.w3.org/2024/06/13-webmachinelearning-minutes.html anssik
15:31:04 s|UrlToSlidesToBeAdded|https://lists.w3.org/Archives/Public/www-archive/2024Jun/att-0000/WebML_WG_Hybrid_AI_Caching.pdf
15:31:05 RRSAgent, draft minutes
15:31:07 I have made the request to generate https://www.w3.org/2024/06/13-webmachinelearning-minutes.html anssik
15:36:01 s/into the future/in the future
15:43:04 s/on is a/on a
15:43:32 s/comments, suggestion/comments, suggestions
15:44:20 RRSAgent, draft minutes
15:44:21 I have made the request to generate https://www.w3.org/2024/06/13-webmachinelearning-minutes.html anssik
17:33:56 Zakim has left #webmachinelearning