04:26:06 RRSAgent has joined #webmachinelearning
04:26:06 logging to https://www.w3.org/2019/09/17-webmachinelearning-irc
04:26:11 Zakim has joined #webmachinelearning
04:26:17 RRSAgent, make logs public
04:26:28 Meeting: WebML CG F2F Day 1 – 17 September 2019
04:26:32 Chair: Anssi
04:26:36 Agenda: https://github.com/webmachinelearning/meetings/blob/master/2019-09-17-fukuoka/
04:26:48 Scribe: Anssi
04:26:50 scribeNick: anssik
04:27:10 Present+ Anssi_Kostiainen
04:27:26 Present+ Ningxin_Hu
04:27:43 Present+ Narifumi_Iwamoto
04:27:55 Present+ Youngsun_Ryu
04:27:58 Present+ David_Marsh
04:28:20 Present+ Philip_Laszkowicz
04:28:24 Present+ Kangchan_Lee
04:28:25 Present+ Rijubrata_Bhaumik
04:28:26 Present+ Takio_Yamaoka
04:32:15 RRSAgent, draft minutes v2
04:32:15 I have made the request to generate https://www.w3.org/2019/09/17-webmachinelearning-minutes.html anssik
04:33:04 TOPIC: Welcome and intros
04:33:24 anssik: welcome to the WebML CG's 2nd F2F, happy to see both new and old faces around
04:34:37 ... on the agenda today on Day 1: intros, custom operations, MLIR (Multi-Level Intermediate Representation) exploration, operation set
04:34:48 ... on Friday, Day 2: exploratory topics, standards track next steps, W3C workshop planning
04:35:00 anssik: let's do roundtable 30-sec intros: your affiliation & interests toward the group
04:35:45 anssik: I'm the chair, working for Intel
04:36:06 nikhil: working for Google, deeplearn.js co-author, want to bring the ecosystem forward, not familiar with W3C
04:36:23 ningxin_hu: Intel, CV and ML interest, OpenCV.js background
04:36:42 kenneth: Intel architect, W3C TAG rep, overseeing the architecture of the Web
04:37:33 Youngsun: Samsung, interested in ML in general
04:38:13 Dave: payments network with many members, just interested in ML
04:38:43 Chunming: university-affiliated
04:39:31 Dean: Apple, interested in everything the group does, not an ML specialist but I'll do my best connecting Apple experts; I work on the WebKit project and Safari
04:40:11 Philip: Omnijar, working with DL for 13 years, with large companies, automotive, NVIDIA, ARM; interested in continuing to move commercial projects to the Web
04:40:44 Riju: Intel, Chromium developer, sensors, NFC, media capture, OpenCV, not using ML currently
04:41:15 Kangchan: ETRI Korea, working on standards in ITU, ML as a Service
04:42:04 Wenson: Apple, WebKit, interest in ML
04:42:26 Diogo: Brazil W3C office, NLP background and interest
04:42:58 Takio: Yahoo Japan, sensor processing, transcoding, interest in CV with ML
04:43:34 Sangwhan: TAG member, used to work for Opera, now at a CV startup not affiliated with the Web; I also do NLP
04:43:53 Frank: Inria France, curious about the group
04:44:40 Belem: Intel, responsible for the WebML polyfill
04:45:16 James: Google, working on Chrome, WebGL/GPU, interested in ML in Chrome
04:46:08 TOPIC: Custom operations
04:51:17 [ningxin presents the slides]
04:53:17 ningxin_hu: the ML field is fast moving; model architectures and ops are evolving quickly. This leads JS ML frameworks to carry big op sets (e.g. TF.js has over 200 ops)
04:53:25 ... today's frameworks' ops are implemented in WebGL, WASM, and WebGPU
04:53:32 ... WebNN's built-in op set that focuses on hardware acceleration will be small and grow slowly
04:53:52 ... problem: library authors need a way to write ops that can interop with the built-in ops
04:54:08 ... option 1: WebNN built-in ops interop with framework ops in WASM and WebGL/WebGPU (focus of this investigation)
04:54:49 kenneth: can you mix Wasm and WebNN ops?
04:55:14 sangwhan: there's a GPU-CPU transfer with a performance cost
04:55:38 ningxin_hu: option 2: WebNN provides a way to write a custom op in a domain-specific language (e.g. Kai's proposal) (future exploration)
04:55:57 ningxin_hu: next subtopic, WebNN-WebGPU interop
04:56:18 [showing example code of Conv + Add + Relu with TF.js WebGPU]
04:57:41 [showing example of compiling a WebNN op for a WebGPU device]
04:59:48 [scribe sees ~30 participants, not all names recorded in minutes]
05:00:25 [showing example of executing a WebNN op with a WebGPU op]
05:02:16 -> https://docs.google.com/presentation/d/1KGRc1RnnYt_1JK2Pk6r2xRkD60v4F8jc4beHMv0crng/ WebNN Interop Investigation slides
05:03:31 [ningxin showing a demo on his laptop]
05:03:52 ningxin_hu: custom build of Chromium on macOS
05:05:44 conv input dims: [1,100,100,100] and filter dims: [3,3,100,100]
05:05:44 WebGPU conv2d/add/relu elapsed time: 60.81 ms
05:05:44 WebNN conv2d interops with WebGPU add/relu via ArrayBuffer elapsed time: 39.67 ms
05:05:44 WebNN conv2d interops with WebGPU add/relu via WebGPUBuffer elapsed time: 22.11 ms
05:05:44 WebNN conv2d with fused add/relu elapsed time: 21.11 ms
05:06:32 [above pasted text is the output of a test case with the TF.js backend set to WebGPU]
05:07:50 sangwhan: is the Chromium source available?
05:07:57 ningxin_hu: yes, it's available
05:08:58 nikhil: how fast is the readback?
05:09:08 ningxin_hu: haven't tested that yet
05:09:24 dino: you can't use MPS, why is that?
05:09:40 ningxin_hu: different memory layout internally
05:10:15 dino: can you show the conv operations, what they are doing?
05:10:33 ... I was expecting to see a custom op, i.e. shader code
05:10:45 ningxin_hu: the shader code is inside TF.js
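[scribe note for the minutes: below is a minimal sketch of the interop pattern ningxin demoed — compiling a WebNN conv2d against the same WebGPU device a framework uses and sharing WebGPUBuffers so no readback is needed. All names (navigator.ml, createCompilation options, buffer binding) are hypothetical illustrations, not an agreed API; the exact shape is what this investigation explores.]

    // Hypothetical sketch, not an agreed API: compile a WebNN conv2d for the
    // WebGPU device a framework (e.g. the TF.js WebGPU backend) already uses,
    // and chain it with the framework's add/relu shaders via a shared WebGPUBuffer.
    async function convAddRelu(device, inputBuffer, filterData) {
      const nn = navigator.ml.getNeuralNetworkContext(); // hypothetical entry point
      const model = await nn.createModel([
        { op: 'conv2d', input: 'x', filter: filterData, output: 'y' },
      ]);
      // Compiling against the WebGPU device lets operands stay on the GPU;
      // the demo measured the ArrayBuffer round trip at nearly 2x the cost
      // (39.67 ms vs 22.11 ms).
      const compilation = await model.createCompilation({ device });
      const execution = await compilation.createExecution();
      execution.setInput('x', inputBuffer);       // WebGPUBuffer in
      const convOut = execution.getOutput('y');   // WebGPUBuffer out
      await execution.startCompute();
      // The framework's WebGPU add/relu ops can now consume convOut directly
      // on the same device queue, with no CPU readback.
      return convOut;
    }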
05:11:47 ningxin_hu: subtopic, POC implementation on MPS
05:11:57 ... Reuse the same MTLDevice associated with the WebGPUDevice
05:12:07 ... Get the MTLBuffer associated with the input and output WebGPUBuffer
05:12:14 ... Allocate MPSImage for inputs with the MTLDevice
05:12:21 ... Create MTLCommandBuffer from the MTLQueue associated with the WebGPUDevice
05:12:28 ... Encode a compute shader that copies and reorders data from MTLBuffer to MPSImage (MPSImage layout)
05:12:48 dino: is this a custom WebGPU implementation? Where do you decide to use MPS?
05:13:15 ... TF.js is running on top of WebGPU
05:13:25 ... this is an impl of WebNN, not TF, on a fork of Chromium
05:13:44 ... using WebGPU infra underneath, which has a platform implementation, e.g. MPS
05:13:55 ningxin_hu: Encode MPSNNGraph/MPSCNNKernel to the MTLCommandBuffer
05:14:02 ... Encode a compute shader that copies and reorders data from the output MPSImage to the output MTLBuffer
05:14:10 ... Commit the MTLCommandBuffer
05:14:37 ningxin_hu: performance summary, inference time (ms):
05:16:00 ... WebGPU conv/add/relu: 61.31
05:16:14 ... WebNN conv interops with WebGPU add/relu via ArrayBuffer: 43.42
05:16:28 ... WebNN conv interops with WebGPU add/relu via WebGPUBuffer: 23.06
05:16:34 ... WebNN conv with fused add/relu: 21.25
05:16:52 ningxin_hu: copying/reordering optimization, inference time (ms):
05:17:16 ... WebGPU conv x2: 112.96
05:17:22 ... WebNN conv + WebGPU conv: 67.33
05:17:38 ... WebNN conv x2 with reordering: 24.53
05:18:45 sangwhan: with this design, vendors target a single type of accelerator; what are the implications?
05:19:12 ... if you were to implement this in a general browser, not OS-bound, you'd have multiple accelerators; what's the story?
05:19:41 ... you'd need to have compilers for every accelerator
05:19:51 ... implementability question
05:20:20 ... if you'd use the platform APIs, it'd be fine, but they can be limited in terms of support
05:20:48 dino: Apple's perspective is we want to offload to the hardware as much as possible
05:21:26 sangwhan: when testing the POC, did the inference affect the ref(?)
05:21:47 dino: same issue with WebGL/GPU
05:22:20 ... issue if the background task freezes the computer
05:22:44 ... battery and perf benefits from going to ML hardware
05:23:01 sangwhan: would be nice if everyone had these purpose-built accelerators
05:23:11 ... curious about the implications of that
05:23:24 dino: not sure which Android devices have AI accelerators
05:23:48 sangwhan: based on testing, could be NEON-accelerated, or GPU, whatever the vendor had time to do
05:24:20 nikhil: also good to benchmark readback times from those accelerators
05:25:17 [skipping slides to the summary of WebNN-WASM interop]
05:25:28 ningxin_hu: WebNN ops allow access to vendor-specific CPU acceleration
05:25:36 ... interop between WASM ops and WebNN ops has overhead:
05:25:42 ... - memory copying between the WASM heap and the WebNN backend
05:25:49 ... - memory reordering, e.g. MKL-DNN blocked layout
05:25:58 ... executing a WebNN ops chain with opaque operands can avoid unnecessary overhead
05:26:24 ningxin_hu: Proposal
05:26:43 ... Support key ops that access hardware acceleration (#17), e.g. conv2d and matmul
05:26:57 ... Support compiling and executing ops for devices (new issue?), CPU or GPU
05:27:09 ... Support interop with WebAssembly and WebGPU compute shader:
05:27:18 ... - sharing ArrayBuffer with a WASM op
05:27:27 ... - sharing WebGPUBuffer with a WebGPU op (new issue?)
05:28:25 ... Support executing an ops chain with opaque operands (#11)
05:28:33 ... - leverage device-optimized memory layout and avoid unnecessary memory reordering
05:28:41 ... Explore custom op support by DSL (new issue?)
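[scribe note: a sketch of the two interop costs summarized above and the opaque-operand chaining idea in #11. The nn.* helpers are invented for illustration; the proposal items track what the real API should become.]

    // Hypothetical sketch, not an agreed API.
    // (1) WASM interop: a Wasm op writes into the Wasm heap; handing the
    //     result to a WebNN op implies a copy out of the heap and possibly a
    //     reorder into the backend's layout (e.g. MKL-DNN blocked layout).
    function wasmThenWebNN(nn, wasmMemory, offset, length) {
      const view = new Float32Array(wasmMemory.buffer, offset, length);
      return nn.conv2d(view); // copy + reorder happen at this boundary
    }

    // (2) Opaque operands (#11): chaining WebNN ops through opaque handles
    //     keeps intermediates in the device-optimized layout, paying for
    //     import/export only once at the ends of the chain.
    function webnnChain(nn, input) {
      const t0 = nn.importOperand(input); // one-time copy/reorder in
      const t1 = nn.conv2d(t0);           // stays in backend layout
      const t2 = nn.conv2d(t1);           // no intermediate readback
      return nn.exportOperand(t2);        // one-time copy/reorder out
    }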
05:30:03 dino: how do these numbers compare with true native frameworks, CoreML, TensorFlow native?
05:31:29 ningxin_hu: roughly 10% WebNN overhead over native
05:31:48 nikhil: TensorFlow/WebGL vs. CUDA: CUDA is 10x faster
05:32:06 ???: what kind of model do you use?
05:32:46 ningxin_hu: we have multiple models for this experiment; we use conv kernels, MobileNet, Inception, ResNet50
05:33:12 ... on our website we have bigger models; the model size constrains us
05:34:09 nikhil: CPU and non-CPU accelerators are an issue: how to consider them in the context of custom ops, and understand readbacks
05:34:40 ???: what is the focus of this group in terms of hardware targets?
05:34:58 ningxin_hu: we have experience on an Android phone with an AI accelerator, close to native perf
05:35:59 ???: what is the scope of this work? Recommend defining a higher-level abstraction to be flexible
05:36:25 [hearing no concerns for the proposed tasks to investigate further]
05:36:55 ningxin_hu: I'm willing to take the "Support compiling and executing ops for devices (new issue?)" task
05:37:28 ... maybe Kai could help with "Explore custom op support by DSL (new issue?)"
05:38:34 dino: Apple could look at "Support key ops that access hardware acceleration (#17)" and provide feedback on that
05:39:15 nikhil: just filed issues for conv2d and matmul
05:39:28 https://github.com/webmachinelearning/webnn/issues/27
05:39:34 https://github.com/webmachinelearning/webnn/issues/28
05:39:57 ... will move forward with issues #27 and #28
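[scribe note: for context on issues #27 and #28, a naive JS reference of what the two ops compute — matmul and a stride-1, no-padding conv2d, assuming HWC input and HWIO filter layout. This is an illustration only; layouts, strides, and padding modes are exactly what the issues need to pin down.]

    // Naive reference semantics, illustration only (assumes batch 1,
    // HWC input, HWIO filter, stride 1, no padding).

    // matmul: C[M x N] = A[M x K] x B[K x N], flat row-major Float32Arrays.
    function matmul(A, B, M, K, N) {
      const C = new Float32Array(M * N);
      for (let m = 0; m < M; ++m)
        for (let k = 0; k < K; ++k)
          for (let n = 0; n < N; ++n)
            C[m * N + n] += A[m * K + k] * B[k * N + n];
      return C;
    }

    // conv2d: input [H, W, Cin], filter [Kh, Kw, Cin, Cout].
    function conv2d(input, H, W, Cin, filter, Kh, Kw, Cout) {
      const Ho = H - Kh + 1, Wo = W - Kw + 1;
      const out = new Float32Array(Ho * Wo * Cout);
      for (let y = 0; y < Ho; ++y)
        for (let x = 0; x < Wo; ++x)
          for (let co = 0; co < Cout; ++co) {
            let acc = 0;
            for (let ky = 0; ky < Kh; ++ky)
              for (let kx = 0; kx < Kw; ++kx)
                for (let ci = 0; ci < Cin; ++ci)
                  acc += input[((y + ky) * W + (x + kx)) * Cin + ci]
                       * filter[((ky * Kw + kx) * Cin + ci) * Cout + co];
            out[(y * Wo + x) * Cout + co] = acc;
          }
      return out;
    }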
05:40:45 Topic: MLIR
05:41:07 nikhil: disclaimer, I'm not a compiler person, but I talked with Google experts in that field
05:42:07 nikhil: we're not proposing MLIR, just exploring this area
05:42:14 do you have a link to the slides?
05:43:00 -> https://docs.google.com/presentation/d/1vv-pFsTqAVITtx3RwmEs-g7YRK1PD9APSIuice88aSI/ MLIR slides by Nikhil
05:43:20 [nikhil presenting MLIR slides]
05:44:49 ???: the XLA compiler spits out LLVM IR already?
05:44:54 nikhil: correct
05:46:11 ... domain-specific optimizations, progressive lowering
05:46:34 ... the TensorFlow compiler ecosystem has many "Graph" IRs, each with challenges
05:47:47 ... domain-specific IRs — great: high-level domain-specific optimizations; progressive lowering encourages reuse between levels
05:48:24 ... not great:
05:48:29 ... - huge expense to build this infrastructure
05:48:34 ... - reimplementation of all the same stuff: pass managers, location tracking, use-def chains, inlining, constant folding, CSE, testing tools, ...
05:48:51 ... - innovations in one community don't benefit the others
05:49:19 nikhil: let's talk about what MLIR is
05:50:21 ... TensorFlow: "An open source machine learning framework for everyone"
05:50:21 ... Multi-Level Intermediate Representation: "An open source program optimization framework for ... everyone"
05:50:21 ... an abstraction-building toolkit
05:50:22 ... a reusable set of compiler passes for higher abstractions
05:50:22 ... targeting analysis/program optimization/code generation
05:50:22 ... open governance and part of LLVM
05:50:48 nikhil: MLIR has wide support across industry
05:51:19 nikhil: extensible operations allow multi-level IR
05:52:43 ... MLIR "Dialects": families of defined operations
05:53:16 ... example dialects: TensorFlow, LLVM IR, XLA HLO, TF Lite, Swift SIL, ...
05:53:16 ... dialects can define: sets of defined operations; an entirely custom type system; customization hooks (constant folding, decoding)
05:53:18 ... an operation can define: invariants on # of operands, results, attributes, etc.; custom parser, printer, verifier, ...
05:53:37 nikhil: MLIR type system, some examples:
05:53:58 ... scalars: f16, bf16, f32, ... i1, i8, i16, i32, ... i3, i4, i7, i57, ...
05:53:58 ... vectors: vector<4 x f32>, vector<4x4 x f16>, etc.
05:53:59 ... tensors, including dynamic shape and rank: tensor<4x4 x f32>, tensor<4x?x?x17x? x f32>, tensor<* x f32>
05:53:59 ... others: functions, memory buffers, quantized integers, other TensorFlow stuff, ...
05:53:59 ... extensible!
05:55:58 nikhil: applications of MLIR
05:56:05 ... TensorFlow Lite converter
05:56:30 ... one of the focuses: usability; usability of TOCO is the top complaint among TFLite users
05:56:46 ... debugging: report why a model failed to convert; dialect types enable more checking & better reporting
05:58:51 nikhil: [MLIR] for the Web?
05:59:08 ... some facts from MLIR investigations:
05:59:14 ... operator expansion is about 25% YoY for TensorFlow
05:59:20 ... hardware vendors will implement dialects
05:59:50 ... open governance
06:00:29 riju: regarding operator expansion, is there a fallback mechanism, even if with a performance penalty?
06:00:37 nikhil: we'd need to use e.g. a Wasm polyfill
06:01:36 nikhil: MLIR dialect on the web — thoughts:
06:02:13 ... no backwards-compatibility guarantees today from MLIR
06:02:13 ... a dialect could be invented that is backwards compatible
06:02:13 ... what does maintaining this look like?
06:02:13 ... cf. Web sourcemaps => Python code: immediately tells you whether the code will execute in the browser
06:02:28 kenneth: the web needs backwards compat, and we don't really do versioning on the Web
06:02:46 nikhil: how could maintaining backwards compatibility happen?
06:03:42 dino: LLVM IR is not well-suited as a web transport format
06:04:31 ... a lot of lowering; what is the improvement?
06:07:05 dino: what is the scope of the group, all models interop with all devices?
06:08:39 ... we could start with a set of ops everyone supports
06:09:05 nikhil: initially we wanted to support all ops
06:09:24 ... then understood that growing the set slowly is a better approach
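[scribe note: a sketch of the fallback idea riju raised above — feature-detect a built-in op and fall back to a Wasm/JS polyfill when it is missing. navigator.ml and supportsOp are invented for this sketch; no such detection API has been agreed.]

    // Hypothetical sketch: prefer a built-in (hardware-accelerated) op,
    // fall back to a polyfill shipped by the framework otherwise.
    async function getConv2d() {
      const ml = navigator.ml; // hypothetical
      if (ml && typeof ml.supportsOp === 'function' && ml.supportsOp('conv2d')) {
        // Built-in path: the small, slowly growing op set.
        return (input, filter) => ml.conv2d(input, filter);
      }
      // Fallback path, e.g. a Wasm kernel bundled with the library
      // (slower, but keeps new/rare ops working everywhere).
      const { conv2d } = await import('./conv2d-polyfill.js');
      return conv2d;
    }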
06:10:14 dino: our fear is, and I could be wrong, that the ecosystem becomes skewed toward TF models, so that those get hardware acceleration while other models might not
06:10:31 nikhil: as a group we can grow that set so that it does not happen
06:10:57 dino: TF is growing fast; how is hardware adding ops?
06:11:20 nikhil: I think hardware vendors add new ops more slowly
06:11:34 kenneth: do any ops go away with time?
06:12:02 riju: is there any kind of ranking within these ops, which are used the most?
06:12:15 nikhil: TF has that data, not sure if we can make it public
06:13:56 Philip: Swift for TF was a good experience from a usability perspective
06:14:13 ... ML is no longer a domain only for data scientists; we need good dev ergonomics
06:14:43 ningxin_hu: at which level of abstraction would the Web dialect of MLIR sit?
06:15:39 nikhil: lower-level things would evolve more slowly, but not sure at this point which level the web dialect should be at
06:16:01 dino: generally Apple's position is that a high-level abstraction works well on the Web since it allows implementations to optimize
06:16:18 ... we don't have a huge dataset, but JS is a good example
06:16:34 ... not enough data yet on how Wasm goes
06:16:53 ... if we did a Web dialect, it would be something like that, but we'd make it a bit higher-level than LLVM IR
06:17:37 nikhil: I'm wondering whether there's a level of abstraction between ops and LLVM IR we should target
06:20:30 anssik: what would be good next steps for the group re MLIR tasks?
06:20:48 nikhil: talking to MLIR people, it seems a bit too early still since it's a moving target
06:21:46 ... concretely, I can try to figure out which ops are used and how many times an op is called
06:24:30 The link to Chris's talk on Swift for TensorFlow can be found here (as an example for other languages): https://www.youtube.com/watch?v=s65BigoMV_I
06:25:56 we'll defer Day 1's 3rd topic, "operation set", to Day 2 on Friday
06:26:09 thanks for attending, we'll see you again on Friday!
06:26:15 Topic: Adjourn
06:26:22 Thanks Anssi!
06:29:10 Present+ Nikhil_Thorat
06:32:09 Present+ Heejin_Chung
06:33:11 Present+ Diogo_Cortiz
06:33:36 Present+ Dean_Jackson
06:34:05 Present+ Wooglae_Kim
06:36:25 Present+ Kenneth_Christiansen
06:37:15 Present+ Wenson_Hsieh
06:38:25 Present+ Sangwhan_Moon
06:39:36 Present+ Belem_Zhang_(remote)
06:39:54 Present+ James_Darpinian_(remote)
06:41:02 Present+ Frank_?
06:41:13 RRSAgent, draft minutes v2
06:41:13 I have made the request to generate https://www.w3.org/2019/09/17-webmachinelearning-minutes.html anssik