[Security-meetings] Moritz Schloegel to Present at Cybersecurity Reading Group on March 24, 2025

Xinan Zhou xinan.zhou at email.ucr.edu
Fri Mar 21 15:13:00 PDT 2025


Hello everyone,

I'm excited to announce that on 3/24 at 4:00 pm (next Monday), Moritz Schloegel
<https://cybersecurity.cs.ucr.edu/moritz_schloegel.html> will be presenting
his work *SoK: Prudent Evaluation Practices for Fuzzing* (Distinguished
Paper Award at the 2024 IEEE Symposium on Security and Privacy (S&P)) at
our Cybersecurity Reading Group.

This presentation will be fully online, and we encourage everyone to join
the Zoom meeting at:
https://ucr.zoom.us/j/95161040878?pwd=i2jZaIGaVsmrohHZszNRBqJfzqM9a1.1

*About Moritz Schloegel*
Moritz Schloegel is a postdoctoral researcher working on systems security
in the SEFCOM lab at Arizona State University. His research focuses on
automatically finding and understanding bugs in software, with a particular
emphasis on fuzzing. Beyond working with bugs, Moritz is interested in all
sorts of program analysis problems, and he is a strong advocate of open
science and reproducibility in research. His work has received multiple
distinctions, including three distinguished paper awards and a runner-up
placement for the Internet Defense Prize. Before joining ASU in early 2025,
Moritz was at the CISPA Helmholtz Center for Information Security and Ruhr
University Bochum, where he obtained his PhD in May 2024 under the
supervision of Thorsten Holz.

*Abstract*
Over the past decade, fuzzing has established itself as one of the most
effective bug-finding techniques. Spurred by the introduction of coverage
feedback, fuzzing research has experienced a renaissance: hundreds of
papers have promised to improve almost all of its aspects, boosting
fuzzers' effectiveness and continuously pushing the boundaries of their
applicability. Yet, many techniques never found widespread adoption, and
anecdotal evidence points to biased or flawed evaluations, raising the
question of whether we can reproduce many of these proposed improvements.
At the same time, many papers inflate their perceived practical impact by
claiming CVEs, presumably to indicate that their new technique uncovered
various critical vulnerabilities in widely used software. Taking a closer
look, however, unveils many questionable CVE assignments. This talk
examines the reproducibility of fuzzing and highlights what can go wrong
during our evaluations. We will discuss various pitfalls, how to do better
in the future, and whether CVEs are a good metric for demonstrating
real-world impact.
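
For readers less familiar with the area, the "coverage feedback" mentioned
above is the loop at the heart of modern fuzzers such as AFL: run a mutated
input, and keep it for further mutation whenever it reaches code not seen
before. Below is a minimal Python sketch of that idea; toy_target and the
single-byte mutator are illustrative stand-ins, not any real fuzzer's
implementation.

    import random

    def mutate(data: bytes) -> bytes:
        # Replace one random byte; real fuzzers combine many mutation
        # strategies (bit flips, splices, dictionaries, havoc, ...).
        if not data:
            return bytes([random.randrange(256)])
        i = random.randrange(len(data))
        return data[:i] + bytes([random.randrange(256)]) + data[i + 1:]

    def toy_target(data: bytes) -> set:
        # Stand-in for an instrumented target: returns the identifiers of
        # the branches ("edges") this input exercises.
        edges = set()
        if data[:1] == b"F":
            edges.add(1)
            if data[1:2] == b"U":
                edges.add(2)
                if data[2:3] == b"Z":
                    edges.add(3)
        return edges

    def fuzz(target, seeds, iterations=100_000):
        corpus = list(seeds)   # inputs that have reached new code
        seen = set()           # all edges observed so far
        for _ in range(iterations):
            child = mutate(random.choice(corpus))
            coverage = target(child)
            if not coverage <= seen:   # new edges reached: keep the input
                seen |= coverage
                corpus.append(child)
        return corpus, seen

    corpus, seen = fuzz(toy_target, [b"AAAA"])
    print(len(corpus), sorted(seen))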

*Contact & More Information*
Moritz Schloegel <https://mschloegel.me/>

Thank you,
Xin'an Emmanuel Zhou

