14:42:06 RRSAgent has joined #webmachinelearning
14:42:10 logging to https://www.w3.org/2025/09/25-webmachinelearning-irc
14:42:10 RRSAgent, make logs Public
14:42:11 please title this meeting ("meeting: ..."), anssik
14:42:11 Meeting: WebML WG Teleconference – 25 September 2025
14:42:16 Chair: Anssi
14:42:20 Agenda: https://github.com/webmachinelearning/meetings/blob/main/telcons/2025-09-25-wg-agenda.md
14:43:05 Scribe: Anssi
14:43:10 scribeNick: anssik
14:43:27 Present+ Anssi_Kostiainen
14:43:35 RRSAgent, draft minutes
14:43:36 I have made the request to generate https://www.w3.org/2025/09/25-webmachinelearning-minutes.html anssik
14:58:12 DwayneR has joined #webmachinelearning
14:58:30 Present+ Dwayne_Robinson
14:59:36 Present+ Tarek_Ziade
15:00:13 Present+ Fabio_Bernardon
15:01:31 handellm has joined #webmachinelearning
15:02:01 Mike_Wyrzykowski has joined #webmachinelearning
15:02:25 Present+ Eugen_Thaci
15:02:51 Present+ Markus_Handell
15:03:06 ningxin has joined #webmachinelearning
15:03:13 Present+ Jason_Mayes
15:03:17 Present+ Reilly_Grant
15:03:36 Present+ Mike_Wyrzykowski
15:03:44 Present+ Jason_McGhee
15:03:53 Present+ Reilly_Grant
15:03:56 Joshua_Lochner has joined #webmachinelearning
15:04:05 Present+ Joshua_Lochner
15:04:13 anolan has joined #webmachinelearning
15:04:13 Present+ Ningxin_Hu
15:04:23 Present+ Rafael_Cintron
15:04:27 Present+ anolan
15:04:42 RRSAgent, draft minutes
15:04:43 I have made the request to generate https://www.w3.org/2025/09/25-webmachinelearning-minutes.html anssik
15:05:07 Present+ Andrew_Nolen
15:05:11 RRSAgent, draft minutes
15:05:13 I have made the request to generate https://www.w3.org/2025/09/25-webmachinelearning-minutes.html anssik
15:05:21 Anssi: we'll start by welcoming our latest new participant:
15:05:30 ... please welcome to the WebML WG, Jason Mayes from Google!
15:06:01 Anssi: Jason is Google's Web AI lead and a familiar face to many of us, looking forward to working with you in this group!
15:06:14 ... also a warm welcome to Fabio Bernardon from NVIDIA to the WG!
15:07:01 Fabio: my background is in system software, I want to make sure our solutions are aligned with this group's goals and provide end-users an ideal user experience
15:07:26 ... and ensure that users get the full benefit of their platforms
15:07:50 RafaelCintron has joined #webmachinelearning
15:08:21 Topic: Incubations
15:08:40 Fabio has joined #webmachinelearning
15:08:51 gb, this is webmachinelearning/charter
15:08:51 anssik, OK.
15:08:53 Anssi: a debrief on the recent WebML Community Group developments
15:08:57 -> WebML CG Teleconference – 18 September 2025 https://github.com/webmachinelearning/meetings/blob/main/telcons/2025-09-18-cg-agenda.md
15:09:14 Anssi: first, the call for review of the WebML CG Charter received unanimous support, thank you for your support and contributions
15:09:23 ... the new Community Group charter is operational as of today and adds WebMCP as a new deliverable
15:09:28 -> Web Machine Learning Community Group Charter https://webmachinelearning.github.io/charter/
15:09:40 Anssi: this means the work on WebMCP can advance from its explainer phase to a spec drafting phase when appropriate
15:10:07 ... I'm very pleased to see a diverse group of experts working on this proposal right from the start, using experimental implementations to validate early design proposals
15:10:14 ... keep up the great work!
15:10:20 gb, this is webmachinelearning/webmcp
15:10:20 anssik, OK.
15:10:30 Anssi: second, we had a productive WebMCP API brainstorming session to discuss:
15:10:38 ... - use cases, and we resolved that WebMCP focuses on human-in-the-loop use cases initially
15:10:55 ... - core design principles, we resolved that WebMCP will provide a layer of abstraction between the browser and the MCP client, aka the "SDK option"
15:11:05 ... - naming discussion, still active in issue #24
15:11:06 https://github.com/webmachinelearning/webmcp/issues/24 -> Issue 24 Bikeshedding the global name (by bwalderman)
15:11:28 ... - API design, we resolved to continue to use the provideContext design and explore adding support for registerTool(options) and unregisterTool(name) to complement the API, informed by implementation experience
15:11:46 ... - declarative API, comments welcome via issue #22 and reviews for the explainer PR #26
15:11:46 https://github.com/webmachinelearning/webmcp/pull/26 -> Pull Request 26 add explainer for the declarative api (by MiguelsPizza)
15:11:46 https://github.com/webmachinelearning/webmcp/issues/22 -> Issue 22 Declarative API Equivalent (by EisenbergEffect)
15:11:56 ... questions, comments?
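To make the API design bullet above more concrete, here is a rough, non-normative sketch of what a registerTool(options)-style call could look like. Only provideContext, registerTool(options) and unregisterTool(name) are named in the discussion above; the global name is still being bikeshedded (issue #24), and the option names, schema format and callback signature below are placeholders for illustration only, not an agreed design.

```
// Hypothetical sketch only: the WebMCP global name (issue #24), option names
// and callback signature are all still under discussion; everything except
// the registerTool/unregisterTool names is a placeholder.
const addToCartTool = {
  name: "add-to-cart",
  description: "Add a product to the shopping cart on this page",
  // MCP-style JSON Schema describing the tool's input (assumed format).
  inputSchema: {
    type: "object",
    properties: {
      productId: { type: "string" },
      quantity: { type: "integer", minimum: 1 }
    },
    required: ["productId"]
  },
  // Invoked by an agent, with a human kept in the loop per the use case resolution.
  async execute({ productId, quantity = 1 }) {
    await addToCart(productId, quantity); // page-defined helper, not part of WebMCP
    return { content: [{ type: "text", text: `Added ${quantity} x ${productId}` }] };
  }
};

// The explored calls complementing provideContext():
navigator.modelContext.registerTool(addToCartTool);   // "modelContext" is a placeholder global
// ...later, e.g. when the cart UI goes away:
navigator.modelContext.unregisterTool("add-to-cart");
```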
15:12:23 Topic: New features and operator specific issues
15:12:28 gb, this is webmachinelearning/webnn
15:12:28 anssik, OK.
15:12:32 -> [operator specific] issues https://github.com/webmachinelearning/webnn/labels/operator%20specific
15:12:36 -> [feature request] issues https://github.com/webmachinelearning/webnn/labels/feature%20request
15:12:53 Subtopic: Support dynamic tensor resizing for slice and resample2d
15:13:00 Anssi: issue #885
15:13:01 https://github.com/webmachinelearning/webnn/issues/885 -> Issue 885 Support dynamic tensor resizing for slice and resample2d (by Honry) [feature request] [operator specific]
15:13:23 ... in the latest comment Wanming explains how he eliminated dynamic nodes in the decoder model from Segment Anything
15:13:33 ... I wanted to discuss the generalizability of this solution proposed by Wanming
15:13:52 ... and understand whether the proper layer to address this performance issue would be in frameworks, i.e. at the WebNN EP level at construction time
15:14:15 ... Dwayne, Ningxin, you wanted to have a chat with Wanming to understand if the WebNN EP construction could be delayed similarly to the DML EP, so that the WebNN EP could resolve patterns of cast/gather/unsqueeze at construction time?
15:14:26 Ehsan5 has joined #webmachinelearning
15:14:30 q?
15:14:32 jason has joined #webmachinelearning
15:14:35 Present+ Ehsan_Toreini
15:15:01 Dwayne: that approach works for that model and operator, a workaround, we don't need this issue specifically if we have the superset issue that's coming next
15:15:30 +1
15:15:44 Anssi: I'd propose we close this issue and redirect to #883 as a general solution
15:15:45 https://github.com/webmachinelearning/webnn/issues/883 -> Issue 883 Support flexible input sizes (by huningxin) [feature request] [operator specific]
15:16:08 Subtopic: Flexible input sizes
15:16:15 Anssi: issue #883
15:16:39 ... this proposed new feature is a significant change, so we wanted to gather feedback from key customers before proceeding
15:16:48 Eugen_Thaci has joined #webmachinelearning
15:16:48 ... we had identified ONNX Runtime and Transformers.js as WebNN API customers with this requirement
15:16:54 ... at our last meeting we received "a massive +1" from Joshua of Transformers.js
15:17:18 ... Reilly noted this must be working on the ORT WebGPU EP now, and Phillis confirmed Core ML supports dynamic shapes on CPU
15:17:31 Mike: correct
15:17:55 Anssi: I think the remaining open item was to check with Guenther for the ORT Web and WebNN EP perspective
15:17:58 zkis has joined #webmachinelearning
15:18:01 ... do we have new information, or should we defer this for later?
15:18:07 Present+ Zoltan_Kis
15:18:22 Dwayne: no new information from Guenther yet
15:19:51 Reilly: this has been on my todo list, even the question that Core ML only supports this on CPU is relevant, we want to use the semantics frameworks use
15:20:01 q?
15:20:56 q?
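As background for why frameworks care about issue #883: with the current API the input shape is fixed in the MLOperandDescriptor when the graph is built, so a caller has to build and compile a separate MLGraph per input size. A minimal sketch of that status quo follows; the shapes and the tiny matmul "model" are illustrative, not taken from the issue.

```
// Minimal sketch of today's static-shape model; shapes here are illustrative.
async function buildForSequenceLength(context, seqLen) {
  const builder = new MLGraphBuilder(context);
  // The shape is baked into the graph at build time.
  const input = builder.input("input", { dataType: "float32", shape: [1, seqLen, 768] });
  const weights = builder.constant(
      { dataType: "float32", shape: [768, 768] },
      new Float32Array(768 * 768)); // placeholder weights
  const output = builder.matmul(input, weights);
  return builder.build({ output });
}

const context = await navigator.ml.createContext();
// Today: one compiled graph per input size; with flexible input sizes a single
// graph could serve a range of sequence lengths.
const graph128 = await buildForSequenceLength(context, 128);
const graph256 = await buildForSequenceLength(context, 256);
```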
15:21:02 q+ to ask
15:21:05 ack anssik
15:21:05 anssik, you wanted to ask
15:21:26 Subtopic: Core operator set
15:21:38 Anssi: issue #573
15:21:39 https://github.com/webmachinelearning/webnn/issues/573 -> Issue 573 Core operator set (by philloooo) [question] [opset]
15:22:27 ... I wanted us to revisit our core operator set effort that aims to identify current primitive gaps by mapping compositional fundamentals to WebNN operators
15:22:43 ... the current WebNN op set contains ~92 operators and is informed by the requirements of popular models
15:23:12 ... both low-level and certain high-level fused operators are included, and all high-level ops can be decomposed into low-level ops
15:23:41 ... in non-normative blocks in the spec we include code that demonstrates the implementability of these decompositions
15:23:56 ... a few examples of high-level ops whose decompositions are more elaborate include e.g.
15:24:05 -> lstm() https://www.w3.org/TR/webnn/#api-mlgraphbuilder-lstm
15:24:18 -> lstmCell() https://www.w3.org/TR/webnn/#api-mlgraphbuilder-lstmcell
15:24:29 -> gru() https://www.w3.org/TR/webnn/#api-mlgraphbuilder-gru
15:24:38 Anssi: if you scroll to the bottom of these sections and search for "can be generically emulated" you'll find the emulation code
15:24:58 ... I linked to the publicly shared spreadsheet Dwayne put together some time ago:
15:25:02 -> Machine Learning Operator Mapping https://onedrive.live.com/edit?id=EE82F5C6F06C7371!345450&resid=EE82F5C6F06C7371!345450&ithint=file%2Cxlsx&authkey=!AK8f-RDTleqlLXE&wdo=2&cid=ee82f5c6f06c7371
15:25:16 Anssi: there are 3 tabs: Operators Rearranged, All Raw Operators, Linalg comparison
15:25:28 ... if we look at the "All Raw Operators" tab, it includes the following compared against all WebNN operators:
15:25:38 -> ONNX Operators https://onnx.ai/onnx/operators/
15:25:43 -> TOSA https://mlir.llvm.org/docs/Dialects/TOSA/
15:25:47 -> MLIR Linalg https://mlir.llvm.org/docs/Dialects/Linalg/
15:25:51 -> StableHLO https://github.com/openxla/stablehlo/blob/main/docs/spec.md
15:25:56 -> PyTorch prims https://docs.pytorch.org/docs/stable/torch.compiler_ir.html#prims-ir
15:26:01 -> Apple MIL https://apple.github.io/coremltools/source/coremltools.converters.mil.mil.ops.defs.html
15:26:06 -> Tencent NCNN https://github.com/Tencent/ncnn/blob/master/docs/developer-guide/operators.md
15:26:10 -> ANN https://developer.android.com/ndk/reference/group/neural-networks
15:26:14 -> TFJS https://js.tensorflow.org/api/latest/
15:26:18 -> TFLite https://www.tensorflow.org/mlir/tfl_ops
15:26:23 -> DirectML https://learn.microsoft.com/en-us/windows/win32/api/directml/ne-directml-dml_operator_type
15:26:28 Anssi: this is a lot of data, thanks Dwayne for sharing this publicly
15:27:00 ... the high-level aim of this exercise is to make sure we have the right compositional fundamentals, aka low-level ops
15:27:15 ... that allow us to decompose all the high-level ops included into these low-level ops
15:27:38 ... our current thinking is that some of these low-level ops may only be useful for composition, not as is, and this is OK
15:27:51 ... we have added the following new primitives based on this research:
15:27:57 - rounding operators (issue #817, PR #859)
15:27:57 https://github.com/webmachinelearning/webnn/issues/817 -> CLOSED Issue 817 Rounding operators (by fdwr) [feature request] [interop]
15:27:58 https://github.com/webmachinelearning/webnn/pull/859 -> MERGED Pull Request 859 Add `roundEven` operator (by fdwr)
15:28:04 - isNaN and isInfinite (PR #858)
15:28:04 https://github.com/webmachinelearning/webnn/pull/858 -> MERGED Pull Request 858 Add `isNaN` and `isInfinite` operators (by fdwr)
15:28:13 ... blocked by lack of platform support are:
15:28:31 - bitwise ops (and, or, xor, left shift, right shift, not), which Core ML does not currently support, proposed as an optional extension
15:28:40 ... to be researched for feasibility:
15:28:46 ... - modulus/remainder, flooring divide
15:28:53 ... - sumPool/minPool
15:28:56 ... - random number generation
15:29:00 ... - expand to support multiples of block sizes
15:29:04 ... - relax dimension limitations (e.g. support conv1d and conv3d too, not just conv2d)
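For readers who have not looked at the emulation blocks mentioned above, here is a small decomposition in the same spirit (written for this summary, not copied from the spec): the high-level hardSwish expressed with the low-level add, clamp, mul and div primitives.

```
// Illustrative decomposition: hardSwish(x) = x * clamp(x + 3, 0, 6) / 6,
// built from low-level WebNN ops via an MLGraphBuilder.
function emulatedHardSwish(builder, x) {
  const three = builder.constant({ dataType: "float32", shape: [] }, new Float32Array([3]));
  const six = builder.constant({ dataType: "float32", shape: [] }, new Float32Array([6]));
  const clamped = builder.clamp(builder.add(x, three), { minValue: 0, maxValue: 6 });
  return builder.div(builder.mul(x, clamped), six);
}
```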
15:30:07 Dwayne: I don't have notes written beyond the latest comment, it would be informative to have input from people who worked on TOSA, PyTorch prims and others
15:30:26 ... if there are no additional issues, I'll tackle the todo items
15:31:18 q+
15:31:22 ack reillyg
15:31:46 Reilly: wanted to say some of the pressure on this issue has been taken off by the opSupportLimits work, we can express what ops are supported in which implementation
15:32:08 ... one aspect of this problem is that we want to make sure people using WebNN know what it can support on the given system
15:32:52 ... the next bit that's valuable is both checking which ops we could add to WebNN because they're supported, to expand the scope of ops, and understanding the ops that frameworks expect to exist and making sure those are expressible in WebNN
15:33:15 ... this is where feedback from Joshua of HF is very helpful
15:33:20 Dwayne: agreed
15:33:21 q?
15:34:35 Reilly: would like to pull in NVIDIA folks, a valuable contribution would be to understand places where the underlying HW has fast paths, places where we might have compositions, but what exists in current gen HW may need something different
15:35:25 ... many decompositions can be re-fused, but that requires a compiler that does not exist yet, the ML model compiler state of the art is immature cf. other compilers
15:35:38 ... so we need to balance against that reality
15:35:38 q?
15:35:40 q+
15:35:44 ack ningxin
15:36:53 Ningxin: I'd second Reilly, from the particular model perspective, when we look at SLMs like Phi mini and TinyLlama as ONNX models running through the WebNN EP, we see very specialized high-level operators, e.g. various attentions, GQA, MatMulNBits
15:37:51 ... we could do some research on those language models, because they are highly optimized, to see how to decompose those ops into primitives, identify any op gaps, and investigate the performance impact
15:38:10 Ningxin: your WebNN function idea from TPAC (composed minigraph representation with a decomposition) is useful here
15:38:12 ... what is the performance impact of the decomposed form, fusing ops back
15:38:13 q?
15:38:57 Ningxin: we have been doing some investigation for those language models, we can gather some data and share it in the issue
15:38:58 q?
15:39:29 q?
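A sketch of the kind of probing Reilly refers to: a framework can inspect opSupportLimits() on a context and choose between a fused high-level op and a decomposition built from primitives. The exact dictionary shape should be checked against the spec; the float16-based policy below is just an example for illustration.

```
// Illustrative probing of opSupportLimits(); the policy (prefer the fused op
// when float16 input is supported) is an example, not a recommendation.
const context = await navigator.ml.createContext();
const limits = context.opSupportLimits();

const fusedGeluOk = limits.gelu?.input?.dataTypes?.includes("float16");
console.log(fusedGeluOk
    ? "use builder.gelu() directly"
    : "decompose gelu from lower-level ops instead");
```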
15:40:08 Fabio: I will get back to the group after talking with the NVIDIA team
15:40:30 q?
15:40:39 Topic: WebNN-WebGPU interop
15:41:05 Anssi: I wanted us to refresh the "webgpu interop" triaged spec issues in light of new implementation experience from the ML-GPU interop work in Chromium
15:41:09 -> "webgpu interop" issues https://github.com/webmachinelearning/webnn/labels/webgpu%20interop
15:41:43 Anssi: I believe some participants are interested in making progress in this space toward the end of the year, so I want to understand which issues to bump in priority
15:41:52 q+
15:41:56 ... a lot of this is under-the-hood implementation work to make the backends interoperate in a performant manner
15:42:00 ... any WebNN API shape implications or possible changes suggested, or is the current MLTensor abstraction working well?
15:42:05 ... there's some POC work to enable WebGPU-WebNN interop with ORT on UMA devices, interested in any learnings from this
15:42:09 -> https://chromium-review.googlesource.com/c/chromium/src/+/6962138
15:42:15 q?
15:42:18 ack reillyg
15:43:07 Reilly: the prototyping work in Chromium seems to be going well, we have a working implementation of export on DirectML and Core ML, would like to add this to the TFLite backend, work in progress
15:43:21 ... we seem to be successful in implementing MLTensor as proposed
15:44:29 ... the other piece is, chatting with Phillis, we designed the export method to be async based on early exploration of implementation feasibility, and we think it might be possible to make it sync, and that might be the next change to the current proposal, if we succeed in that implementation exercise
15:44:30 q?
15:44:51 Anssi: sync benefit?
15:45:19 Reilly: performance, when moving from CPU to GPU you must schedule a new GPU task, supporting sync would allow the full pipeline to be sent in one go
15:45:20 q?
15:45:42 q?
15:45:44 q?
15:46:49 Ningxin: implementation experience from context creation would be beneficial, creating from a WebGPU device vs. from an NPU device
15:47:28 q+
15:47:37 ... in this context, to interop with WebGPU, there's interest from application developers in allowing NPU context interop with WebGPU
15:47:45 ack RafaelCintron
15:48:15 Rafael: the answer to Ningxin's question: to enable people to use inference on the NPU, the browser needs to take care of copying the data between the two adapters
15:48:35 q+
15:48:54 Ningxin: thanks Rafael, my question is if the developer needs to know whether this path is efficient, is it zero-copy or not?
15:49:09 Rafael: we have to experiment and look at the results
15:49:11 ack reillyg
15:49:49 Reilly: looking at frameworks, initially DirectML was the only one that tied a context to the GPU and other devices, other frameworks worked at a higher level to span across CPU, GPU and NPU
15:50:14 ... as Rafael said, the implementation figures out where to keep the buffer based on the given hint
15:50:30 ... and also adjusts that based on real-world usage, the buffer could be moved around
15:50:40 q?
15:51:21 Anssi: it would be interesting to see an explainer for various approaches to UMA
15:51:21 q?
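On Ningxin's point about context-creation paths, the two that exist today are sketched below: a context created from a WebGPU device for interop, and an option-based context where the implementation picks the device. A dedicated NPU-interop path, as discussed, is not directly expressible yet.

```
// The two context-creation paths discussed (sketch; error handling omitted).
const adapter = await navigator.gpu.requestAdapter();
const gpuDevice = await adapter.requestDevice();

// WebGPU-backed context: created from an explicit GPUDevice for interop.
const gpuContext = await navigator.ml.createContext(gpuDevice);

// Option-based context: the implementation picks CPU/GPU/NPU, guided by hints.
const hintedContext = await navigator.ml.createContext({ powerPreference: "low-power" });
```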
15:51:40 Topic: Privacy considerations
15:51:47 Anssi: issue #886 and PR #890
15:51:48 https://github.com/webmachinelearning/webnn/pull/890 -> Pull Request 890 Revise privacy considerations (by anssiko)
15:51:48 https://github.com/webmachinelearning/webnn/issues/886 -> Issue 886 Revise privacy considerations (by anssiko) [privacy-tracker]
15:51:53 ... I submitted a PR with a first stab at revising the privacy considerations section in response to the privacy review we received
15:52:02 ... the suggested changes are the following:
15:52:06 ... - add a new introduction section
15:52:13 ... - add a Fingerprinting subsection, revise and expand content: note design principles wrt operators, MLContextOptions, opSupportLimits()
15:52:18 ... - add an Execution Time Analysis subsection
15:52:35 ... - add a WebGPU Comparison subsection
15:52:51 ... I requested initial review from Reilly because he had the context, but welcome review from others as well
15:53:19 ... as you see, fingerprinting is the key consideration so it got its own subsection to highlight the mitigations
15:53:56 ... also Execution Time Analysis, the known privacy issue inherent to any compute API, got its own section and welcomes further input from experts
15:54:04 ... lastly, we carve out the discussion on the WebGPU comparison into its own subsection
15:54:18 q?
15:54:41 Reilly: didn't have a chance to look yet
15:54:48 Topic: MLGraph Cache
15:54:54 -> Explainer https://github.com/webmachinelearning/webnn/blob/main/cache-explainer.md
15:54:59 Anssi: this topic was included to check for any feedback from Apple, MikeW?
15:55:32 Mike: looked at this briefly, not completely clear why this does not happen implicitly
15:55:39 ... or is this a detail I missed
15:55:40 q?
15:55:42 q+ to explain.
15:55:48 ack reillyg
15:55:48 reillyg, you wanted to explain.
15:56:02 Reilly: I swear this was in one of the explainers, Zoltan was in the implicit caching camp
15:56:46 ... the example here is an API that does implicit caching, Wasm and WebGPU do that, not specified per se, but that's how Wasm does it: when a module is loaded, we stick the compiled version in the same bucket, so if it has already been compiled we reuse that instead
15:57:08 ... in WebGPU folks reuse compiled shaders
15:57:55 ... the difference with WebNN is the size of the input: Wasm modules and WebGPU shaders are small, Wasm 10-20 MB, whereas ML models, while some are small, can be from 100 MB to over 1 GB
15:58:09 ... the GPU shader model does not work
15:58:24 ... that doesn't work here very well, it would be inefficient
15:58:49 ... with a model loader approach we could possibly follow the Wasm approach, but that does not work with a model builder design
15:59:13 ... caching is asking the developer "give me a name for the model", and the model is keyed on that
15:59:14 q?
15:59:17 Related: https://github.com/webmachinelearning/webnn/issues/807#issuecomment-2608135598
15:59:18 https://github.com/webmachinelearning/webnn/issues/807 -> Issue 807 Caching mechanism for MLGraph (by anssiko) [question] [feature request]
15:59:48 Also related: https://github.com/webmachinelearning/webnn/blob/main/cache-explainer.md#explicit-vs-implicit-api
15:59:50 Mike: I see that as per-origin, no privacy concerns with sharing models
15:59:51 q?
16:00:25 Mike: everything looks good to me, thanks
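To illustrate the explicit, developer-keyed shape Reilly describes ("give me a name for the model"), here is a hypothetical usage sketch: loadGraph()/saveGraph() are placeholder names invented for this note, not necessarily the explainer's API, and buildGraphFromWeights() stands in for the expensive download-and-build step. See cache-explainer.md for the actual proposal.

```
// Hypothetical sketch of an explicit, developer-keyed graph cache;
// loadGraph()/saveGraph() are placeholder names, not the explainer's API.
const CACHE_KEY = "my-model-v3-fp16";

async function getGraph(context) {
  const cached = await context.loadGraph?.(CACHE_KEY); // placeholder call
  if (cached) return cached;                           // reuse the compiled graph

  const graph = await buildGraphFromWeights(context);  // assumed helper: fetch weights + build()
  await context.saveGraph?.(CACHE_KEY, graph);         // placeholder call
  return graph;
}
```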
16:00:46 Topic: Query supported devices
16:00:51 Anssi: this problem space is split in two: before and after graph compilation
16:00:55 ... and we need to decide whether we want to proceed with the "before" case, the "after" case, or both
16:01:00 Subtopic: Before graph compilation
16:01:07 Anssi: issue #815 and PR #884
16:01:11 https://github.com/webmachinelearning/webnn/pull/884 -> Pull Request 884 Update explainer with new proposal for simple accelerator mapping (by zolkis)
16:01:11 https://github.com/webmachinelearning/webnn/issues/815 -> Issue 815 Query supported devices before graph compilation (by anssiko) [device selection]
16:01:12 Anssi: the PR was updated by Zoltan (thanks again!) based on feedback from the previous call to a simplified boolean-returning context.accelerated API
16:01:19 -> Summary of changes https://github.com/webmachinelearning/webnn/issues/815#issuecomment-3329979085
16:01:20 https://github.com/webmachinelearning/webnn/issues/815 -> Issue 815 Query supported devices before graph compilation (by anssiko) [device selection]
16:01:23 Anssi: to make the latest proposal easy to review and comment on, here's the proposed IDL change from the explainer:
```
partial dictionary MLContextOptions {
  boolean accelerated = true;
};

partial interface MLContext {
  readonly attribute boolean cpuFallbackActive;
};
```
16:02:21 Zoltan: that's the minimal proposal, we only have this property, because the context option only makes sense if the UA cannot provide acceleration, to satisfy the use case raised by Reilly in the issue
16:02:38 ... if we don't want to deal with the error, the UA will always try to accelerate and the option will always default to true
16:03:03 ... the only difference is how we want to handle an explicit error at context creation if the platform cannot support acceleration
16:03:21 ... another consideration is whether we want to expose this as an event or as a property
16:03:40 ... also, opSupportLimits by device is another possible design, shared with the post-graph-compilation case
16:03:42 q?
16:04:45 Rafael: I think boiling it down to booleans seems fine, would like to understand the event vs. property on the context, is this due to the context getting lost?
16:05:14 Zoltan: it's for the app to know when there's CPU fallback, I wanted to figure out how to spec events, a property would be easier to integrate into existing algorithms
16:05:38 Rafael: that'd be aligned with WebGPU/GL, would like to hear from Markus if this satisfies his requirements
16:06:03 lol
16:06:12 i hear it
16:06:42 Markus: couple of use cases
16:06:52 [ use cases x 1000 ]
16:07:37 Markus: power preferences plus this sounds OK for large models that need to have acceleration
16:07:50 ... if you want low latency, maybe the framework only supports a CPU backend
16:07:56 Zoltan: NPU could be an option
16:08:13 ... we could add "low-latency" to the context options
16:08:22 ... you only check if CPU fallback is active?
16:09:02 Markus: for audio processing you have latency going back and forth, waiting for completion, in those cases we still want to use the benefit of WebNN CPU acceleration, define that as accelerated == false
16:09:26 Zoltan: you'd want context creation with accelerated == false, do you want an error in that case so the UA would know that's not possible?
16:09:44 Markus: getting access to the MLContext and reading that would be fast, an error is not needed in this case
16:09:48 Zoltan: thanks, this would be simple
16:10:15 Markus: if the system is running the model and falls back, that'd be interesting to know, if we have a property then we'd need to poll that property
16:10:39 ... we can poll the property on every frame
16:10:40 q?
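A usage sketch of the proposal quoted above, covering the two behaviours discussed: checking cpuFallbackActive once after context creation, and polling it per frame as Markus suggests. This reflects the explainer proposal under discussion, not a shipped API.

```
// Sketch against the proposed (not yet specified) option and property above.
const context = await navigator.ml.createContext({ accelerated: true });

if (context.cpuFallbackActive) {
  // e.g. skip loading a large model that is only viable with acceleration
}

function onFrame() {
  // Per-frame polling, as discussed, to notice a later fallback to CPU.
  if (context.cpuFallbackActive) {
    // e.g. switch to a lighter model or reduce work
  }
  requestAnimationFrame(onFrame);
}
requestAnimationFrame(onFrame);
```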
16:11:06 Zoltan: I will move forward finishing this PR and put up a spec PR for comments
16:11:07 q?
16:11:22 Topic: Open PRs
16:11:26 Anssi: PR #857
16:11:27 https://github.com/webmachinelearning/webnn/pull/857 -> Pull Request 857 Support rankRange for op output tensors in opSupportLimits (by huningxin)
16:11:34 ... this PR adds rankRange for graph input, constant and output in opSupportLimits()
16:11:42 ... we have two approvals, editors, please feel free to merge at your convenience -- thanks for your work on this!
16:11:50 Anssi: the long-standing PR #770 was also merged this week to bake in the rename pool2d roundingType -> outputShapeRounding
16:11:50 https://github.com/webmachinelearning/webnn/pull/770 -> MERGED Pull Request 770 Rename pool2d MLRoundingType - Simplify the operand layout support of conv2d and pooling 2d operations (by fdwr)
16:11:53 ... thank you again editors, reviewers and all contributors
16:11:58 ... great to see the PR queue get shorter
16:12:16 q?
16:12:50 RRSAgent, draft minutes
16:12:52 I have made the request to generate https://www.w3.org/2025/09/25-webmachinelearning-minutes.html anssik
16:13:24 Present- Jason_Mayes
16:13:25 RRSAgent, draft minutes
16:13:26 I have made the request to generate https://www.w3.org/2025/09/25-webmachinelearning-minutes.html anssik
16:33:56 RRSAgent, draft minutes
16:33:58 I have made the request to generate https://www.w3.org/2025/09/25-webmachinelearning-minutes.html anssik
18:32:19 Zakim has left #webmachinelearning