
Publications and Research
Document Type
Article
Publication Date
Spring 5-5-2025
Abstract
Efficiency is essential to support responsiveness with respect to ever-growing datasets, especially for Deep Learning (DL) systems. DL frameworks have traditionally embraced deferred execution-style DL code that supports symbolic, graph-based Deep Neural Network (DNN) computation. While scalable, such development is error-prone, non-intuitive, and difficult to debug. Consequently, more natural, imperative DL frameworks encouraging eager execution have emerged, but at the expense of run-time performance. Though hybrid approaches aim for the "best of both worlds," using them effectively requires subtle considerations to make code amenable to safe, accurate, and efficient graph execution, avoiding performance bottlenecks and semantically inequivalent results. We discuss the engineering aspects of a refactoring tool that automatically determines when it is safe and potentially advantageous to migrate imperative DL code to graph execution and vice versa.
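As an illustrative sketch only (assuming a TensorFlow-style hybrid API such as tf.function; the abstract does not name a specific framework), the snippet below shows one of the subtleties mentioned above: naively hybridizing an imperative function can change its observable behavior, because Python-level side effects run only while the function is traced into a graph, not on every call.

import tensorflow as tf

def report_eager(x):
    # Imperative (eager) version: every statement runs on each call.
    print("called with", x.numpy())
    return x * 2

@tf.function
def report_hybrid(x):
    # Hybridized version: the Python print executes only during tracing,
    # so later calls with the same input signature skip it entirely.
    print("tracing")
    tf.print("executing graph")  # graph-level op: runs on every call
    return x * 2

report_hybrid(tf.constant(1))  # prints "tracing" and "executing graph"
report_hybrid(tf.constant(2))  # prints only "executing graph"

Deciding automatically when such a migration preserves semantics and is likely to pay off at run time is the kind of analysis the refactoring tool described here performs.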
Included in
Artificial Intelligence and Robotics Commons, Programming Languages and Compilers Commons, Software Engineering Commons
Comments
To appear in the formal tool demonstration track of the 2025 International Conference on Fundamental Approaches to Software Engineering (FASE '25), held as part of the 2025 European Joint Conferences on Theory and Practice of Software (ETAPS '25).