"Given a dataset and a trained model, this notebook allows the user to sample from the latent space distributions and generate video clips showcasing the results"
"Given a dataset, this notebook allows the user to \n",
"\n",
"* Load and process the dataset using deepof.data\n",
"* Visualize data quality with interactive plots\n",
"* Visualize training instances as multi-timepoint scatter plots with interactive configurations\n",
"* Visualize training instances as video clips with interactive configurations"
"In the cell above, you see the percentage of labels per body part which have a quality lower than the selected value (0.50 by default) **before** preprocessing. The values are taken directly from DeepLabCut."
" plt.title(\"Positions across time for centered data\")\n",
" plt.legend(\n",
" fontsize=15,\n",
" bbox_to_anchor=(1.5, 1),\n",
" title=\"Body part\",\n",
" title_fontsize=18,\n",
" shadow=False,\n",
" facecolor=\"white\",\n",
" )\n",
")"
"\n",
" plt.ylim(-100, 60)\n",
" plt.xlim(-60, 60)\n",
" plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The figure above is a multi time-point scatter plot. The time_slider allows you to scroll across the video, and the length_slider selects the number of time-points to include. The idea is to intuitively visualize the data that goes into a training instance for a given preprocessing setting."
" ax.set_title(\"Positions across time for centered data\")\n",
" ax.set_ylim(-100, 60)\n",
" ax.set_xlim(-60, 60)\n",
" ax.set_xlabel(\"x\")\n",
" ax.set_ylabel(\"y\")\n",
"\n",
"# plot_model(encoder, \"encoder\")\n",
"# plot_model(decoder, \"decoder\")\n",
"# plot_model(grouper, \"grouper\")\n",
"# plot_model(gmvaep, \"gmvaep\")"
" video = animation.to_html5_video()\n",
" html = display.HTML(video)\n",
" display.display(html)\n",
" plt.close()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 3. Pass data through all models"
"The figure above displays exactly the same data as the multi time-point scatter plot, but in the form of a video (one training instance at the time)."