Episode 3:
Watch all of Episode 3
Play this playlist for the intro, the run-down on what we’ll learn and do in Episode 3, the show and tell, the interview, and the eight-part lab.
Episode 3.1:
Run-down
Ben gives a quick run-down on what we’ll learn and build today, the technologies we’ll use, who we’ll be talking to and what they built.
Tools and technology we’ll use today:
Episode 3.2:
Show & Tell
Ben shows off some Tensorflow.js pose capture demos, including bodypose, facemesh, and handpose, before sitting down to chat with Google’s Developer Evangelist. And then, of course, he gets caught up in animating himself as an SVG boy with https://github.com/yemount/pose-animator.
Episode 3.3:
Interview with Jason Mayes
Ben chats with Tensorflow.js’s lead developer advocate to find out what TFJS can do, from an easy but incredibly useful hello-world application all the way to the cutting edge of what ML can do today. Jason Mayes talks all about his early days working on WebML, creating superpowers on the web, as well as his lifelong goals of taking to the skies in every way possible.
Links:
Episode 3.4:
Lab Step 1 - Project Templates
This time we’re using a template/starter project rather than starting from an empty project. We try out both OpenWC’s generator (https://open-wc.org/docs/development/generator/) and Lit’s TypeScript project template (https://github.com/lit/lit-element-starter-ts).
Each starter kit comes with a way to create docs, and so we go through a quick review of how either Storybook or 11ty docs are generated.
Episode 3.5:
Lab Step 2 - Making a video player
It’s time to start building our first component to simply play video, and we do this by building on and changing what the Lit template provides. We also discuss building this component with Web Standard APIs only: we don’t use Lit here because, with so little DOM and so few UI controls, it doesn’t make much sense. But with that choice comes a little extra work that Lit would typically smooth over for us.
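As a rough illustration (not the episode’s exact code), a standards-only video element with no Lit involved might look something like this; the tag name and “src” attribute handling are just assumptions for the sketch:

```ts
// A minimal sketch of a standards-only video player element (no Lit).
// Tag name and attribute handling are illustrative, not the episode's exact code.
export class VideoElement extends HTMLElement {
  static observedAttributes = ['src'];

  protected video = document.createElement('video');

  constructor() {
    super();
    const root = this.attachShadow({ mode: 'open' });
    this.video.setAttribute('playsinline', '');
    root.appendChild(this.video);
  }

  attributeChangedCallback(name: string, _old: string | null, value: string | null) {
    // Lit would normally smooth over reactive property handling like this for us.
    if (name === 'src' && value) {
      this.video.src = value;
    }
  }

  play() { return this.video.play(); }
  pause() { this.video.pause(); }
}

customElements.define('video-element', VideoElement);
```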
After taking a first stab at the video player, we tweak the Lit-provided dev demo page to show our component off during development! Next, we organize some of our transpiled source. It gets emitted to the root of the project and isn’t meant to be checked into source control, so we edit our .gitignore file a bit to reflect that.
Episode 3.6:
Lab Step 3 - Adding the player controls
In this step we create some playback controls. This is the one piece where we DO use Google’s Lit, which is helpful for the several stateful UI controls we have here. But it’s the only place we use Lit, and these playback controls are actually optional for the larger video player component: a developer enables them by placing them inside the video component tag as a child (or leaves them out entirely). This works by using “slots” in our video component.
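Roughly, the slot-based approach looks like this (tag names are hypothetical, just to show the shape of it):

```ts
// Inside the video component's shadow DOM, a <slot> lets consumers opt in to
// controls by nesting them as a child element (tag names are hypothetical).
const shadowTemplate = `
  <video part="video"></video>
  <slot></slot>
`;

// Consumer markup: the Lit-based controls are optional; omit the child and the
// player simply renders without them.
const usage = `
  <video-element src="clip.mp4">
    <video-controls></video-controls>
  </video-element>
`;
```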
As we wire up events on these UI controls, the concept of subclassed events is introduced. These are much better than the old way of creating custom events, in that a subclassed Event can contain custom logic and functionality right inside the event itself, rather than having that logic spread all over the place as before.
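For example, a subclassed event might look something like this (the event name and fields are hypothetical):

```ts
// A sketch of a subclassed event: the event itself carries its data and logic,
// instead of stuffing everything into a CustomEvent's `detail` object.
export class PlaybackEvent extends Event {
  static readonly eventName = 'playbackchange';

  constructor(
    public readonly playing: boolean,
    public readonly currentTime: number
  ) {
    // bubbles/composed let the event escape the shadow DOM to the host element.
    super(PlaybackEvent.eventName, { bubbles: true, composed: true });
  }
}

// Dispatching from a control:
//   this.dispatchEvent(new PlaybackEvent(true, video.currentTime));
// Listening on the player:
//   this.addEventListener(PlaybackEvent.eventName, (e) => { /* ... */ });
```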
Episode 3.7:
Lab Step 4 - Fixing timeline scrubbing
It turns out that our video player can’t scrub properly because web-dev-server doesn’t support partial (range) requests. This is fairly unique to our dev environment, and it turns out we can install some middleware to handle it. Because web-dev-server is actually built on Koa.js, we can grab this middleware from any number of Koa packages listed on Koa’s GitHub page.
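One possible wiring, assuming the koa-range package is the middleware chosen (any Koa range-request middleware would plug in the same way):

```ts
// web-dev-server.config.mjs (a sketch; the real file is plain JavaScript).
// web-dev-server is built on Koa, so Koa middleware plugs straight in.
import range from 'koa-range';

export default {
  nodeResolve: true,
  // Adds HTTP range request support so the video timeline can scrub.
  middleware: [range],
};
```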
Episode 3.8:
Lab Step 5 - Working with TFJS as an ES Module
Here’s the part that broke the Web Compon-o-tron! Tensorflow.js has some difficulties being imported as an ES Module, so we’re going to try a Rollup bundling step of just those TFJS libraries to get it working in our project, and we’ll have these bundled imports as source files that we can import easily.
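A Rollup config for that bundling step might look roughly like this (entry and output paths are hypothetical; the idea is a tiny entry file of our own that just re-exports the TFJS packages):

```ts
// rollup.config.js (a sketch; paths and entry names are assumptions).
import resolve from '@rollup/plugin-node-resolve';
import commonjs from '@rollup/plugin-commonjs';

export default {
  // e.g. an entry file that only re-exports from '@tensorflow-models/pose-detection'
  // and the TFJS core/backend packages it needs.
  input: 'src/tensorflow/pose-detection.js',
  output: { file: 'tensorflow/pose-detection.js', format: 'es' },
  plugins: [resolve({ browser: true }), commonjs()],
};
```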
Episode 3.9:
Lab Step 6 - Realtime Pose detection
In this step we extend our object-oriented hierarchy further. We build on top of video-element to create a videopose-element, and extend THAT out to create our specific pose solutions for hands, face, and body. We get all of these Tensorflow models working with a live HTML5 video element as the source. This means realtime pose detection! We also normalize the points coming from these TFJS models so that they all follow a similar format. This will be important in the final steps!
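As a rough sketch of the idea (the model choice and normalized point shape here are assumptions, not the episode’s exact code), a detection loop against a live video element could look like this:

```ts
import * as poseDetection from '@tensorflow-models/pose-detection';
import '@tensorflow/tfjs-backend-webgl';

// A normalized point shape shared across hand/face/body models (an assumption
// for this sketch; the real components define their own format).
export interface NormalizedPoint { name?: string; x: number; y: number; score?: number }

export async function detectLoop(
  video: HTMLVideoElement,
  onPoints: (points: NormalizedPoint[]) => void
) {
  const detector = await poseDetection.createDetector(
    poseDetection.SupportedModels.MoveNet
  );

  const tick = async () => {
    const poses = await detector.estimatePoses(video);
    // Normalize pixel coordinates to 0..1 so every model's output looks the same.
    const points = (poses[0]?.keypoints ?? []).map((kp) => ({
      name: kp.name,
      x: kp.x / video.videoWidth,
      y: kp.y / video.videoHeight,
      score: kp.score,
    }));
    onPoints(points);
    requestAnimationFrame(tick);
  };
  requestAnimationFrame(tick);
}
```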
Episode 3.10:
Lab Step 7 - Visualization and Pose Playback
In this (almost) final step, we begin with all of our TFJS pose models working, but their output only appears in the console. So now we build out a visualization canvas to overlay on top of our video. This is another optional layer we can choose (or choose not) to add as a child element that occupies the slot of our component.
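The drawing side of that overlay is straightforward canvas work; a rough sketch, assuming the normalized point format from Step 6:

```ts
// Draw normalized (0..1) keypoints onto an overlay canvas sized to the video.
export function drawPoints(
  canvas: HTMLCanvasElement,
  points: { x: number; y: number; score?: number }[]
) {
  const ctx = canvas.getContext('2d');
  if (!ctx) return;
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  ctx.fillStyle = 'lime';
  for (const p of points) {
    if ((p.score ?? 1) < 0.4) continue; // skip low-confidence keypoints
    ctx.beginPath();
    ctx.arc(p.x * canvas.width, p.y * canvas.height, 4, 0, Math.PI * 2);
    ctx.fill();
  }
}
```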
We also try out the record function of our component and can download a full recording, with audio if preferred, of the pose we capture. As with the console output at the start of this step, this download is just points. So we finish up by building another video-like player, but one that plays back just the captured points, along with an audio track if one was captured.
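The playback side of that points-only player can be sketched like this (the frame shape and names are hypothetical): each captured frame stores a timestamp plus its points, and playback just steps through them against the clock.

```ts
interface PoseFrame { time: number; points: { x: number; y: number }[] }

// Step through captured frames in real time, handing each frame's points to a
// render callback (e.g. the canvas drawPoints sketch above).
export function playBack(frames: PoseFrame[], render: (pts: PoseFrame['points']) => void) {
  if (!frames.length) return;
  const start = performance.now();
  let index = 0;
  const tick = () => {
    const elapsed = performance.now() - start;
    while (index < frames.length - 1 && frames[index + 1].time <= elapsed) index++;
    render(frames[index].points);
    if (elapsed < frames[frames.length - 1].time) requestAnimationFrame(tick);
  };
  requestAnimationFrame(tick);
}
```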
With everything fully working, we could almost end here, but there are some final steps to make this ready to publish on NPM and consume in an application.
Episode 3.11:
Lab Step 8 - Prepping for Publishing
This final step is basically just the main branch of this project, so there’s no GitHub link for this specific step. There were only a couple of things left to wrap up this set of TFJS pose detection components. First, we needed to make sure our Rollup-bundled Tensorflow modules can be consumed both from the “src” folder during dev and from the root of the project when using the component.
And then finally to close things out, we dive into creating documentation with the 11ty static site generator (https://www.11ty.dev), and I show off a simple demo app that consumes our new set of motion capture components.
Season 1 Episode 3
Posted on: August 30, 2022 at 03:00 PM