Finding Bugs in Deep Learning: A New Approach with Citadel

Sat Dec 07 2024
Deep learning is everywhere these days, and so are the bugs in the tools we use to build and test these systems. Existing tools just don't cut it when it comes to finding performance bugs, which can seriously slow down both training and inference of deep learning models. This is a tough nut to crack because there are few known examples of such bugs to test against. Enter Citadel, a new method that aims to speed up bug finding by hunting for bugs that resemble ones we've already seen. It's like a detective who starts from solved cases: Citadel first gathers reports on known bugs and pinpoints the problematic APIs behind them. It then defines a notion of context similarity to measure how alike different API pairs are, and uses that to automatically generate test cases likely to expose similar bugs.
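To make the idea concrete, here is a minimal sketch of what "context similarity" between API pairs could look like. The signature data, the Jaccard-style overlap metric, and the specific API names below are illustrative assumptions for this sketch, not Citadel's actual implementation.

```python
# Toy illustration of ranking APIs by similarity to a known buggy API.
# The metric and the signature data are assumptions, not Citadel's method.
from dataclasses import dataclass


@dataclass(frozen=True)
class ApiContext:
    """A crude stand-in for an API's context: its name and parameter names."""
    name: str
    params: frozenset


def context_similarity(a: ApiContext, b: ApiContext) -> float:
    """Jaccard overlap of parameter names, used here as a toy similarity proxy."""
    if not a.params and not b.params:
        return 0.0
    return len(a.params & b.params) / len(a.params | b.params)


# Hypothetical example: an API with a known bug, and two candidates to test next.
known_buggy = ApiContext(
    "torch.nn.functional.conv2d",
    frozenset({"input", "weight", "bias", "stride", "padding"}),
)
candidates = [
    ApiContext(
        "torch.nn.functional.conv3d",
        frozenset({"input", "weight", "bias", "stride", "padding"}),
    ),
    ApiContext("torch.nn.functional.relu", frozenset({"input", "inplace"})),
]

# Rank candidates by similarity to the known bug's context; the most similar
# APIs would be the first to receive generated test cases.
ranked = sorted(
    candidates,
    key=lambda c: context_similarity(known_buggy, c),
    reverse=True,
)
for c in ranked:
    print(f"{c.name}: {context_similarity(known_buggy, c):.2f}")
```

In this sketch, conv3d scores higher than relu because it shares most of conv2d's parameters, so it would be the first candidate to get test cases derived from the known bug.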
Citadel covers a lot of ground, looking at 1,436 APIs in PyTorch and 5,380 in TensorFlow. It manages to find 77 bugs in PyTorch and 74 in TensorFlow, many of which are performance bugs that other tools miss. What's more, a whopping 35.40% of the test cases generated by Citadel actually trigger bugs, blowing the previous best method (3.90%) out of the water.
https://localnews.ai/article/finding-bugs-in-deep-learning-a-new-approach-with-citadel-705defcf
