Coding Video: PyTorch C++ Anomaly Detection
Today, I invite you to join me in implementing PyTorch anomaly detection for C++.
Quite a while ago, I recorded a screencast, fixing your first PyTorch bug, in which I fixed a bug live.
In a completely different thread of events, PyTorch has a neat feature for debugging your models when things go wrong in the backward: autograd anomaly detection. (If you want to learn everything about autograd and how to make the most of it, join my online course in November.) This is great because when things go wrong during training, chances are that they do so in the backward. As this is a hot feature, it isn't surprising that (more than a year ago) someone asked on the PyTorch forums how to use anomaly detection from C++. Every now and then I try to help out there (as an homage to ptrblck and alband, who do the heavy lifting, and to the countless experts helping out there generously), and so I thought I could find the C++ function that is called from Python when you enable anomaly detection (I once wrote a blog post on the same topic for Tensor (ATen) functions) and posted a reply. But that led to segfaults, and to make things worse, I let it slip.
Recently, Douane Nielsen revived the thread by posting a short snippet reproducing the segfault. So I thought it would be good fun not only to implement C++ anomaly detection but also to record a video of me doing it. (I discussed the strategy a bit with Alban Desmaison. Thank you for your input, Alban! The errors in the implementation are all mine, though.)
So without further ado, here is the video:
And the result is PyTorch PR 46981.
With that, happy debugging of your models!
Postscript
(Added November 3rd.) I'm quite happy to say that the PR was accepted, as was a follow-up that implements a guard-based approach (again, thank you, Alban!): you can now instantiate a guard variable

torch::autograd::DetectAnomalyGuard detect_anomaly;

and have the detection turned on temporarily.
This comes on the heels of another bit not shown in the video: as anomaly detection adds overhead to each and every PyTorch call, it is slow. I had forgotten to turn off anomaly detection in my test, which caused the other tests run in the same executable to fail and time out.
Given that I (arguably a leading C++ anomaly mode expert at the time, if only because nobody else had written code against it) forgot that, a guard like the one above, which ends anomaly detection when it goes out of scope, seems clearly preferable. I am glad that I made that embarrassing mistake so you don't have to!
Making of...
So I wanted to experiment a bit with OBS. As you can see, I'm really proud of the Python Video Source extension, the SVG renderer(s) written in it, and the little effect box I made. Maybe I should write a blog post or make a video about these, too, or publish the code.
If you like the video, you could hop over to GitHub Sponsors to see if you want more. Give me a shout if your target level of sponsorship isn't among the options.
Autograd Course
I'll be releasing the ultimate PyTorch autograd course in November, with in-depth coverage of all the nifty features. Pre-register so you don't miss it.