
Publications

Permanent URI for this collection: https://hdl.handle.net/10217/239510


Recent Submissions

Now showing 1 - 20 of 29
  • Item (Open Access)
    Segmentation and immersive visualization of brain lesions using deep learning and virtual reality
    (Colorado State University. Libraries, 2025-01-19) Kelley, Brendan, author; Plabst, Lucas, author; Plabst, Lena, author; ACM, publisher
    Magnetic resonance imaging (MRI) scans are commonly used for diagnosing potential neurological disorders; however, the preparation and interpretation of MRI scans require professional oversight. Additionally, MRIs are typically viewed as single cross sections of the affected regions, which does not always capture the full picture of brain lesions and can be difficult to understand due to 2D's inherent abstraction of our 3D world. To address these challenges, we propose an immersive visualization pipeline that combines deep learning image segmentation, using a VGG-16 model trained on MRI fluid attenuated inversion recovery (FLAIR) scans, with virtual reality (VR) immersive analytics. Our visualization pipeline begins with our VGG-16 model predicting which regions of the brain are potentially affected by a disease. This output, along with the original scan, is then volumetrically rendered. These renders can then be viewed in VR using a head-mounted display (HMD). Within the HMD, users can move through the volumetric renderings to view the affected regions and utilize planes to view cross sections of the MRI scans. Our work provides a potential pipeline and tool for diagnosis and care.
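A minimal sketch of the pipeline's post-processing stages: thresholding the segmentation model's per-voxel probabilities into a lesion mask, then slicing the volume as a cutting plane would in the headset. The function names, the toy 4x4x4 volume, and the 0.5 threshold are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical post-processing after the VGG-16 model emits per-voxel
# lesion probabilities: binarize into a mask, then take a cross section.
def segment_lesions(prob_volume, threshold=0.5):
    """Binarize per-voxel probabilities into a 0/1 lesion mask."""
    return [[[1 if v >= threshold else 0 for v in row]
             for row in plane] for plane in prob_volume]

def cross_section(volume, index):
    """Slice along the first axis -- one cutting-plane orientation."""
    return volume[index]

# Toy 4x4x4 probability volume with one 2x2x2 high-probability region.
probs = [[[0.9 if 1 <= z <= 2 and 1 <= y <= 2 and 1 <= x <= 2 else 0.0
           for x in range(4)] for y in range(4)] for z in range(4)]
mask = segment_lesions(probs)
flagged = sum(v for plane in mask for row in plane for v in row)
print(flagged)                    # 8 voxels flagged as lesion
print(cross_section(mask, 1))     # the 2D slice a VR plane would show
```

In the actual pipeline these arrays would be volumetrically rendered for the HMD rather than printed.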
  • Item (Open Access)
    Automatic hardware pragma insertion in high-level synthesis: a non-linear programming approach
    (Colorado State University. Libraries, 2025-02-07) Pouget, Stéphane, author; Pouchet, Louis-Noël, author; Cong, Jason, author; ACM, publisher
    High-Level Synthesis enables the rapid prototyping of hardware accelerators by combining a high-level description of the functional behavior of a kernel with a set of micro-architecture optimizations as inputs. Such optimizations can be described by inserting pragmas, e.g., pipelining and replication of units, or by even higher-level transformations for HLS such as automatic data caching using the AMD/Xilinx Merlin compiler. Selecting the best combination of pragmas, even within a restricted set, remains particularly challenging, and the typical state of practice uses design-space exploration (DSE) to navigate this space. But due to the highly irregular performance distribution of pragma configurations, typical DSE approaches are either extremely time-consuming or operate on a severely restricted search space. This work proposes a framework to automatically insert HLS pragmas in regular loop-based programs, supporting pipelining, unit replication, and data caching. We develop an analytical performance and resource model as a function of the input program properties and pragmas inserted, using non-linear constraints and objectives. We prove that this model provides a lower bound on the actual performance after HLS. We then encode this model as a Non-Linear Program (NLP), by making the pragma configuration the unknowns of the system, which is computed optimally by solving this NLP. This approach can also be used during DSE, to quickly prune points with a (possibly partial) pragma configuration, driven by lower bounds on achievable latency. We extensively evaluate our end-to-end, fully implemented system, showing it can effectively manipulate spaces of billions of designs in seconds to minutes for the kernels evaluated.
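The core idea, modeling latency and resource use analytically as a function of pragma choices and then optimizing over that model, can be sketched in miniature. The latency and resource formulas, trip count, and DSP budget below are invented for illustration; a brute-force scan over this tiny space stands in for the NLP solver.

```python
import itertools

# Hypothetical analytical model of one loop nest as a function of two
# "pragmas": an unroll factor and pipelining on/off.
TRIP_COUNT = 1024
DSP_BUDGET = 64

def latency(unroll, pipeline):
    # Unrolling divides iterations; pipelining overlaps them (II = 1),
    # leaving only pipeline fill overhead. Constants are illustrative.
    iters = TRIP_COUNT / unroll
    return iters + 10 if pipeline else iters * 4

def resources(unroll, pipeline):
    # Each unrolled copy replicates compute units.
    return unroll * (2 if pipeline else 1)

# Enumerate the (tiny) configuration space; a real NLP formulation would
# optimize the same objective under the same constraints symbolically.
space = itertools.product([1, 2, 4, 8, 16, 32], [False, True])
feasible = [(u, p) for u, p in space if resources(u, p) <= DSP_BUDGET]
best = min(feasible, key=lambda c: latency(*c))
print(best, latency(*best))
```

The pruning use described in the abstract corresponds to discarding any configuration whose modeled lower bound already exceeds the best latency found.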
  • Item (Open Access)
    Maximal simplification of polyhedral reductions
    (Colorado State University. Libraries, 2025-01-09) Narmour, Louis, author; Yuki, Tomofumi, author; Rajopadhye, Sanjay, author; ACM, publisher
    Reductions combine collections of input values with an associative and often commutative operator to produce collections of results. When the same input value contributes to multiple outputs, there is an opportunity to reuse partial results, enabling reduction simplification. Simplification often produces a program with lower asymptotic complexity. Typical compiler optimizations yield, at best, a constant-factor speedup, but a complexity improvement from, say, cubic to quadratic complexity yields unbounded speedup for sufficiently large problems. It is well known that reductions in polyhedral programs may be simplified automatically, but previous methods cannot exploit all available reuse. This paper resolves this long-standing open problem, thereby attaining minimal asymptotic complexity in the simplified program. We propose extensions to prior work on simplification to support any independent commutative reduction. At the heart of our approach is piece-wise simplification, the notion that we can split an arbitrary reduction into pieces and then independently simplify each piece. However, the difficulty of using such piece-wise transformations is that they typically involve an infinite number of choices. We give constructive proofs to deal with this and select a finite number of pieces for simplification.
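The payoff of simplification can be seen on the classic sliding-window reduction, where reusing the previous partial result drops the cost from O(N·W) to O(N). This sketch covers only the easy case of an operator with an inverse (subtraction undoes addition); the paper's contribution is handling general commutative reductions where no such inverse shortcut exists.

```python
# Each output y[i] = sum of x[i-W+1..i]: a reduction whose input sets
# overlap between adjacent outputs.

# Naive evaluation recomputes the shared partial sums: O(N*W) work.
def naive(x, W):
    return [sum(x[max(0, i - W + 1):i + 1]) for i in range(len(x))]

# Simplified form reuses the previous result: y[i] = y[i-1] + x[i] - x[i-W],
# exploiting that + has an inverse. O(N) work.
def simplified(x, W):
    y, acc = [], 0
    for i, v in enumerate(x):
        acc += v
        if i >= W:
            acc -= x[i - W]   # retire the value leaving the window
        y.append(acc)
    return y

x = [3, 1, 4, 1, 5, 9, 2, 6]
assert naive(x, 3) == simplified(x, 3)
print(simplified(x, 3))
```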
  • Item (Open Access)
    A unified framework for automated code transformation and pragma insertion
    (Colorado State University. Libraries, 2025-02-27) Pouget, Stéphane, author; Pouchet, Louis-Noël, author; Cong, Jason, author; ACM, publisher
    High-Level Synthesis compilers and Design Space Exploration tools have greatly advanced the automation of hardware design, improving development time and performance. However, achieving a good Quality of Results still requires extensive manual code transformations, pragma insertion, and tile size selection, which are typically handled separately. The design space is too large to be fully explored by this fragmented approach, which is difficult to navigate, limits the exploration of potential optimizations, and complicates the design generation process. To tackle this obstacle, we propose Sisyphus, a unified framework that automates code transformation, pragma insertion, and tile size selection within a common optimization framework. By leveraging Nonlinear Programming, our approach efficiently explores the vast design space of regular loop-based kernels, automatically selecting loop transformations and pragmas that minimize latency. Evaluation against state-of-the-art frameworks, including AutoDSE, NLP-DSE, and ScaleHLS, shows that Sisyphus achieves superior Quality of Results, outperforming alternatives across multiple benchmarks. By integrating code transformation and pragma insertion into a unified model, Sisyphus significantly reduces design generation complexity and improves performance for FPGA-based systems.
  • Item (Open Access)
    Lights, headset, tablet, action: exploring the use of hybrid user interfaces for immersive situated analytics
    (Colorado State University. Libraries, 2024-10-24) Zhou, Xiaoyan, author; Lee, Benjamin, author; Ortega, Francisco R., author; Batmaz, Anil Ufuk, author; Yang, Yalong, author; ACM, publisher
    While augmented reality (AR) headsets provide entirely new ways of seeing and interacting with data, traditional computing devices can play a symbiotic role when used in conjunction with AR as a hybrid user interface. A promising use case for this setup is situated analytics. AR can provide embedded views that are integrated with their physical referents, and a separate device such as a tablet can provide a familiar situated overview of the entire dataset being examined. While prior work has explored similar setups, we sought to understand how people perceive and make use of visualizations presented on both embedded visualizations (in AR) and situated visualizations (on a tablet) to achieve their own goals. To this end, we conducted an exploratory study using a scenario and task familiar to most: adjusting light levels in a smart home based on personal preference and energy usage. In a prototype that simulates AR in virtual reality, embedded visualizations are positioned next to lights distributed across an apartment, and situated visualizations are provided on a handheld tablet. We observed and interviewed 19 participants using the prototype. Participants were easily able to perform the task, though the extent to which the visualizations were used during the task varied, with some making decisions based on the data and others only on their own preferences. Our findings also suggest two distinct roles that situated and embedded visualizations can play, and how this clear separation might improve user satisfaction and minimize attention-switching overheads in this hybrid user interface setup. We conclude by discussing the importance of considering the user's needs, goals, and the physical environment for designing and evaluating effective situated analytics applications.
  • Item (Open Access)
    Integrating soft skills training into your course through a collaborative activity
    (Colorado State University. Libraries, 2025-02-18) Brieven, Géraldine, author; Moraes, Marcia, author; Pawelczak, Dieter, author; Vasilache, Simona, author; Donnet, Benoit, author; ACM, publisher
    Nowadays, employers highly value soft skills, yet many students lack these fundamental abilities. Teaching soft skills involves fostering active student participation and facilitating communication of technical knowledge among peers. This approach presents challenges: (i) creating an engaging learning environment; (ii) ensuring students get timely feedback; (iii) finding an approach that is not too time-consuming for instructors to prepare. The Collaborative Design & Build (CDB) activity, described in this paper, was designed to respond to these challenges. It simulates a real-life scenario, triggering students' interest. The success of this collaborative activity hinges on students working together in a structured chain, where each team builds upon and contributes to the success of the others. This fosters student engagement and accountability as they realize the impact of their actions on the entire chain. This pedagogical approach has already been adopted by four universities abroad. This paper shows how it can be deployed in different courses. Finally, it also discusses how students perceived the activity through four soft skills: collaboration, communication, problem solving and critical thinking. These skills were selected based on their relevance, both in the context of the collaborative activity and in the job market. They are also aligned with the ''4C's of 21st Century skills''. Results show that while students initially struggled with soft skills, consistent practice throughout the semester boosted their confidence, especially in communication. This makes the activity particularly relevant in the classroom, as communication is considered the most important soft skill for the future.
  • Item (Open Access)
    Towards synthesis of application-specific forward error correction (FEC) codes
    (Colorado State University. Libraries, 2024-11-18) McClurg, Jedidiah, author; Baker, Lauren Zoe, author; Canizales, Ronaldo, author; Karki, Dilochan, author; ACM, publisher
    Forward error correction (FEC) is a key component of modern high-bandwidth networks. Typically implemented at the physical layer, FEC attaches error-correcting codes to blocks of transmitted data, allowing some corrupted blocks to be repaired without retransmission. We outline a synthesis-based approach for automatic exploration of the FEC-code design space, focusing on Hamming codes. We formally verify the correctness of a Hamming (128, 120) code used for FEC in the recent 802.3df Ethernet standard, and provide preliminary evidence that our prototype synthesizer can leverage user-provided formal properties to generate FEC codes that are highly robust, efficiently implementable, and tuned to support specific data formats such as IEEE floating points.
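As a concrete, scaled-down instance of the code family discussed, here is a Hamming(7,4) encoder with single-error correction via syndrome decoding; the verified (128, 120) code in the abstract follows the same parity-check structure at a larger block size.

```python
# Hamming(7,4): 4 data bits, 3 parity bits at positions 1, 2, 4;
# corrects any single bit flip within the 7-bit block.
def encode(d):  # d = [d1, d2, d3, d4]
    p1 = d[0] ^ d[1] ^ d[3]   # covers positions 1, 3, 5, 7
    p2 = d[0] ^ d[2] ^ d[3]   # covers positions 2, 3, 6, 7
    p3 = d[1] ^ d[2] ^ d[3]   # covers positions 4, 5, 6, 7
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]   # positions 1..7

def correct(c):
    c = list(c)
    # Syndrome bits spell out the flipped position (0 means no error).
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 * 1 + s2 * 2 + s3 * 4
    if pos:
        c[pos - 1] ^= 1       # repair without retransmission
    return [c[2], c[4], c[5], c[6]]   # recover the data bits

data = [1, 0, 1, 1]
code = encode(data)
code[4] ^= 1                  # simulate a single-bit channel error
assert correct(code) == data
print(correct(code))
```

A synthesizer as described would search over parity-check structures like this one, guided by user-provided formal properties, rather than fixing the code up front.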
  • Item (Open Access)
    Attacks and defenses for large language models on coding tasks
    (Colorado State University. Libraries, 2024-10-27) Zhang, Chi, author; Wang, Zifan, author; Zhao, Ruoshi, author; Mangal, Ravi, author; Fredrikson, Matt, author; Jia, Limin, author; Pasareanu, Corina, author; ACM, publisher
    Modern large language models (LLMs), such as ChatGPT, have demonstrated impressive capabilities for coding tasks, including writing and reasoning about code. They improve upon previous neural network models of code, such as code2seq or seq2seq, that already demonstrated competitive results when performing tasks such as code summarization and identifying code vulnerabilities. However, these previous code models were shown vulnerable to adversarial examples, i.e., small syntactic perturbations designed to "fool" the models. In this paper, we first aim to study the transferability of adversarial examples, generated through white-box attacks on smaller code models, to LLMs. We also propose a new attack using an LLM to generate the perturbations. Further, we propose novel cost-effective techniques to defend LLMs against such adversaries via prompting, without incurring the cost of retraining. These prompt-based defenses involve modifying the prompt to include additional information, such as examples of adversarially perturbed code and explicit instructions for reversing adversarial perturbations. Our preliminary experiments show the effectiveness of the attacks and the proposed defenses on popular LLMs such as GPT-3.5 and GPT-4.
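One class of attack mentioned, small semantics-preserving syntactic perturbations, can be illustrated with an identifier rename: the program's behavior is unchanged, which is exactly why a model that flips its prediction has been "fooled". The sample function and the misleading name are made up for this sketch; the paper's white-box and LLM-generated perturbations are more sophisticated.

```python
import ast

# A tiny target program for the perturbation.
SOURCE = """
def is_sorted(xs):
    return all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1))
"""

class Rename(ast.NodeTransformer):
    """Rename identifiers: a semantics-preserving syntactic perturbation."""
    def __init__(self, mapping):
        self.mapping = mapping
    def visit_Name(self, node):
        node.id = self.mapping.get(node.id, node.id)
        return node
    def visit_arg(self, node):
        node.arg = self.mapping.get(node.arg, node.arg)
        return node

tree = ast.parse(SOURCE)
Rename({"xs": "unsorted_data"}).visit(tree)   # deliberately misleading name
perturbed = ast.unparse(tree)

# The perturbation preserves behavior -- the property such attacks exploit.
env_a, env_b = {}, {}
exec(SOURCE, env_a)
exec(perturbed, env_b)
assert env_a["is_sorted"]([1, 2, 3]) == env_b["is_sorted"]([1, 2, 3])
print(perturbed)
```

The prompt-based defenses described would include examples like this pair in the prompt, instructing the LLM to ignore or reverse such renamings.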
  • Item (Open Access)
    Predicting attrition among software professionals: antecedents and consequences of burnout and engagement
    (Colorado State University. Libraries, 2024-12) Trinkenreich, Bianca, author; Santos, Fabio, author; Stol, Klaas-Jan, author; ACM, publisher
    In this study of burnout and engagement, we address three major themes. First, we offer a review of prior studies of burnout among IT professionals and link these studies to the Job Demands-Resources (JD-R) model. Informed by the JD-R model, we identify three factors that are organizational job resources and posit that these (a) increase engagement and (b) decrease burnout. Second, we extend the JD-R by considering software professionals' intention to stay as a consequence of these two affective states, burnout and engagement. Third, we focus on the importance of factors for intention to stay, and actual retention behavior. We use a unique dataset of over 13,000 respondents at one global IT organization, enriched with employment status 90 days after the initial survey. Leveraging partial least squares structural equation modeling (PLS-SEM) and machine learning, we find that the data mostly support our theoretical model, with some variation across different subgroups of respondents. An importance-performance map analysis suggests that managers may wish to focus on interventions regarding burnout as a predictor of intention to leave. The machine learning model suggests that engagement and opportunities to learn are the top two most important factors that explain whether software professionals leave an organization.
  • Item (Open Access)
    DeepSoil: a science-guided framework for generating high precision soil moisture maps by reconciling measurement profiles across in-situ and remote sensing
    (Colorado State University. Libraries, 2024-10-29) Khandelwal, Paahuni, author; Pallickara, Sangmi Lee, author; Pallickara, Shrideep, author; ACM, publisher
    Soil moisture plays a critical role in several domains and can be used to inform decision-making in agricultural settings, drought forecasting, forest fire predictions, and water conservation. Soil moisture is measured using in-situ and remote-sensing equipment. Depending on the type of equipment that is used, some challenges must be reconciled, including the density of observations, the measurement precision, and the resolutions at which these measurements are available. In particular, in-situ measurements are high-precision but sparse, while remote sensing measurements benefit from spatial coverage, albeit at lower precision and coarser resolutions. The crux of this study is to produce higher-precision soil moisture estimates at high resolutions (30m). Our methodology combines scientific models, deep networks, topographical characteristics, and information about ambient conditions alongside both in-situ and remote sensing data to accomplish this. Domain science infuses several aspects of our methodology. Our empirical benchmarks profile several aspects and demonstrate that our methodology captures spatial variability while accounting for both static (soil properties and elevation) and dynamically varying phenomena to generate accurate, high-precision 30m resolution soil moisture content maps.
  • Item (Open Access)
    The restorative influence of virtual reality environment design
    (Colorado State University. Libraries, 2024-08-30) Nicoly, Jalynn Blu, author; Masters, Rachel, author; Gaddy, Vidya, author; Interrante, Victoria, author; Ortega, Francisco, author; ACM, publisher
    Virtual reality (VR) could support the need for easily accessible therapeutic techniques, such as viewing art and immersing oneself in nature. Our study searches for the optimal virtual environment (VE) by exploring whether beauty in moving and still VEs contributes to stress reduction and perceived restorativeness. We hypothesized that the moving forest environment would result in the most stress reduction, while the abstract art would result in the least, with additional comparisons to a still forest environment and a control condition. The control condition took place outside the virtual headset to simulate what stress reduction would look like without a nature intervention. After working with 78 participants, we found a statistically significant increase in stress reduction and perceived restorativeness in the moving forest condition compared to the control, as measured by the Zuckerman Inventory of Personal Reactions (ZIPERS) positive affect and the Perceived Restorativeness Scale (PRS). Additionally, the PRS and heart rate measures showed greater restorativeness in the moving forest condition than in the abstract art condition. Heart rate measures also showed statistically significant differences between the forest image condition and both the control and moving forest conditions.
  • Item (Open Access)
    Combating spatial disorientation in a dynamic self-stabilization task using AI assistants
    (Colorado State University. Libraries, 2024-11-24) Mannan, Sheikh Abdul, author; Hansen, Paige, author; Vimal, Vivekanand Pandey, author; Davies, Hannah N., author; DiZio, Paul, author; Krishnaswamy, Nikhil, author; ACM, publisher
    Spatial disorientation is a leading cause of fatal aircraft accidents. This paper explores the potential of AI agents to aid pilots in maintaining balance and preventing unrecoverable losses of control by offering cues and corrective measures that ameliorate spatial disorientation. A multi-axis rotation system (MARS) was used to gather data from human subjects self-balancing in a spaceflight analog condition. We trained models over this data to create "digital twins" that exemplified performance characteristics of humans with different proficiency levels. We then trained various reinforcement learning and deep learning models to offer corrective cues if loss of control is predicted. Digital twins and assistant models then co-performed a virtual inverted pendulum (VIP) task programmed with identical physics. From these simulations, we picked the 5 best-performing assistants based on task metrics such as crash frequency and mean distance from the direction of balance. These were used in a co-performance study with 20 new human subjects performing a version of the VIP task with degraded spatial information. We show that certain AI assistants were able to improve human performance and that reinforcement-learning-based assistants were objectively more effective but were rated as less trusted and less preferred by humans.
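A toy version of the self-balancing dynamic can be simulated with an inverted pendulum under Euler integration, with a proportional-derivative rule standing in for the learned assistant's corrective cues. All constants and the controller form are illustrative, not the MARS or VIP parameters from the study.

```python
import math

# Minimal inverted-pendulum update (Euler integration). A simple PD rule
# plays the role of the assistant, cueing a corrective input whenever the
# state drifts from the direction of balance.
DT, G, L = 0.02, 9.81, 1.0   # step (s), gravity (m/s^2), arm length (m)

def step(theta, omega, control):
    alpha = (G / L) * math.sin(theta) + control   # angular acceleration
    omega += alpha * DT
    theta += omega * DT
    return theta, omega

def assistant(theta, omega, kp=15.0, kd=4.0):
    return -kp * theta - kd * omega               # corrective cue

theta, omega = 0.3, 0.0   # start 0.3 rad away from upright
for _ in range(500):       # 10 simulated seconds
    theta, omega = step(theta, omega, assistant(theta, omega))
print(round(abs(theta), 4))   # settles near the balance point
```

Without the assistant term, the same dynamics diverge from upright, which is the "loss of control" the study's agents are trained to predict and prevent.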
  • Item (Open Access)
    The impact of nature realism on the restorative quality of virtual reality forest bathing
    (Colorado State University. Libraries, 2024-11) Masters, Rachel, author; Nicoly, Jalynn, author; Gaddy, Vidya, author; Interrante, Victoria, author; Ortega, Francisco, author; ACM, publisher
    Virtual reality (VR) forest bathing for stress relief and mental health has recently become a popular research topic. As people spend more of their lives indoors and have less access to the restorative benefit of nature, having a VR nature supplement has the potential to improve quality of life. However, the optimal design of VR nature environments is an active area of investigation with many research questions to be explored. One major issue with VR is the difficulty of rendering high-fidelity assets in real time without causing cybersickness, or VR motion sickness, within the headset. Due to this limitation, we investigate whether the realism of VR nature is critical for the restorative effects by comparing a low-realism nature environment to a high-realism nature environment. We found a significant difference only in the perceived restorativeness of the two environments, but after observing trends in our data toward the stress-reduction potential of the high-realism environment, we suggest exploring more varieties of high- and low-realism environments in future work to investigate the full potential of VR and how people respond to it.
  • Item (Open Access)
    Fast and scalable monitoring for value-freeze operator augmented signal temporal logic
    (Colorado State University. Libraries, 2024-05-14) Ghorbel, Bassem, author; Prabhu, Vinayak S., author; ACM, publisher
    Signal Temporal Logic (STL) is a timed temporal logic formalism that has found widespread adoption for the rigorous specification of properties in Cyber-Physical Systems. However, STL is unable to specify oscillatory properties commonly required in engineering design. This limitation can be overcome by adding new operators, for example signal-value freeze operators, or with first-order quantification. Previous work on augmenting STL with such operators has resulted in intractable monitoring algorithms. We present the first efficient and scalable offline monitoring algorithms for STL augmented with independent freeze quantifiers. Our final optimized algorithm has an O(|ρ| log |ρ|) dependence on the trace length |ρ| for most traces ρ arising in practice, and an O(|ρ|²) dependence in the worst case. We also provide experimental validation of our algorithms: we show the algorithms scale to traces having 100k time samples.
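A flavor of the amortized techniques behind such monitors: computing the robustness of "eventually within W samples" (a windowed max over the trace) for every start index in O(|ρ|) total using a monotonic deque. This is a standard offline-monitoring building block, not the paper's freeze-quantifier algorithm.

```python
from collections import deque

# Offline monitoring of F_[0,W] over a sampled trace: out[i] is the max of
# signal[i..i+W]. The monotonic deque gives O(|rho|) total work instead of
# recomputing each window's max from scratch.
def eventually(signal, W):
    out, dq = [], deque()                  # dq: indices, values decreasing
    for j, v in enumerate(signal):
        while dq and signal[dq[-1]] <= v:
            dq.pop()                       # dominated candidates never win
        dq.append(j)
        i = j - W
        if i >= 0:
            out.append(signal[dq[0]])      # max over signal[i..i+W]
            if dq[0] == i:
                dq.popleft()               # index i leaves the window
    return out

trace = [1.0, -0.5, 2.0, 0.3, -1.2, 0.8]
robustness = eventually(trace, 2)
print(robustness)
```

Positive entries mean the "eventually" requirement is satisfied from that sample onward within the window; the trick generalizes to the min/max sweeps used in efficient STL robustness computation.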
  • Item (Open Access)
    A framework for profiling spatial variability in the performance of classification models
    (Colorado State University. Libraries, 2024-04-03) Warushavithana, Menuka, author; Barram, Kassidy, author; Carlson, Caleb, author; Mitra, Saptashwa, author; Ghosh, Sudipto, author; Breidt, Jay, author; Pallickara, Sangmi Lee, author; Pallickara, Shrideep, author; ACM, publisher
    Scientists use models to further their understanding of phenomena and inform decision-making. A confluence of factors has contributed to an exponential increase in spatial data volumes. In this study, we describe our methodology to identify spatial variation in the performance of classification models. Our methodology allows tracking a host of performance measures across different thresholds for the larger, encapsulating spatial area under consideration. Our methodology ensures frugal utilization of resources via a novel validation budgeting scheme that preferentially allocates observations for validations. We complement these efforts with a browser-based, GPU-accelerated visualization scheme that also incorporates support for streaming to assimilate validation results as they become available.
  • Item (Open Access)
    RUBIKS: rapid explorations and summarization over high dimensional spatiotemporal datasets
    (Colorado State University. Libraries, 2024-04-03) Mitra, Saptashwa, author; Young, Matt, author; Breidt, Jay, author; Pallickara, Sangmi, author; Pallickara, Shrideep, author; ACM, publisher
    Exponential growth in spatial data volumes has occurred alongside increases in the dimensionality of datasets and the rates at which observations are generated. Rapid summarization and explorations of such datasets are a precursor to several downstream operations including data wrangling, preprocessing, hypothesis formulation, and model construction among others. However, researchers are stymied both by the dimensionality and data volumes that often entail extensive data movements, computation overheads, and I/O. Here, we describe our methodology to support effective summarizations and explorations at scale over arbitrary spatiotemporal scopes, which encapsulate the spatial extents, temporal bounds, or combinations thereof over the data space of interest. Summarizations can be performed over all variables representing the dataspace or subsets specified by the user. We extend the concept of data cubes to encompass spatiotemporal datasets with high dimensionality and where there might be significant gaps in the data because measurements (or observations) of diverse variables are not synchronized and may occur at diverse rates. We couple our data summarization features with a rapid choropleth visualizer that allows users to explore spatial variations of diverse measures of interest. We validate these concepts in the context of an Environmental Protection Agency dataset which tracks over 4000 chemical pollutants present in natural water sources across the United States from 1970 onwards.
  • Item (Open Access)
    Teach students to study using quizzes, study behavior visualization, and reflection: a case study in an introduction to programming course
    (Colorado State University. Libraries, 2024-01-15) Moraes, Marcia C., author; Lionelle, Albert, author; Ghosh, Sudipto, author; Folkestad, James E., author; ACM, publisher
    Due to a long history of using rote memorization and rereading as the primary means to study, students are coming to the University with misconceptions about study strategies that are beneficial for their performance and long-term learning. Techniques such as spaced retrieval practice, interleaving, and metacognition are proven by cognitive and educational researchers as strategies that greatly improve learning. They focus on helping students take responsibility for their learning and retention of information. Considering their benefits, quizzes were re-branded to be formative low-stakes retrieval practice activities (RPAs) in an Introduction to Programming Course (CS1), meaning that students would use the quizzes as learning tools, testing themselves in a spaced and interleaved manner as many times as they want during the semester. Additionally, the U-Behavior learning and teaching method was used. This method applies visualizations of students' study habits and self-reflections to help students be aware of their study practices, reflect on them, and change their study routine to improve performance and long-term learning. Study behaviors were analyzed and the final Canvas exam, final coding exam, and final course grades were compared for students who spaced and interleaved their practice with students who did not. Results showed a statistically significant increase in all grades evaluated for students who practiced using this novel combination of spacing and interleaving integrated with U-Behavior visualizations and RPA reflection activities for learning.
  • Item (Open Access)
    Artists' perspectives on natural interactions for virtual reality 3D sketching
    (Colorado State University. Libraries, 2024-05-11) Rodriguez, Richard, author; Sullivan, Brian T., author; Machuca, Mayra Donaji Barrera, author; Batmaz, Anil Ufuk, author; Tornatzky, Cyane, author; Ortega, Francisco R., author; ACM, publisher
    Virtual Reality (VR) applications like OpenBrush offer artists access to 3D sketching tools within the digital 3D virtual space. These 3D sketching tools allow users to "paint" using virtual digital strokes that emulate real-world mark-making. Yet, users paint these strokes through (unimodal) VR controllers. Given that sketching in VR is a relatively nascent field, this paper investigates ways to expand our understanding of sketching in virtual space, taking full advantage of what an immersive digital canvas offers. Through a study conducted with the participation of artists, we identify potential methods for natural multimodal and unimodal interaction techniques in 3D sketching. These methods demonstrate ways to incrementally improve existing interaction techniques and incorporate artistic feedback into the design.
  • Item (Open Access)
    Tiled bit networks: sub-bit neural network compression through reuse of learnable binary vectors
    (Colorado State University. Libraries, 2024-10-21) Gorbett, Matt, author; Shirazi, Hossein, author; Ray, Indrakshi, author; ACM, publisher
    Binary Neural Networks (BNNs) enable efficient deep learning by saving on storage and computational costs. However, as the size of neural networks continues to grow, meeting computational requirements remains a challenge. In this work, we propose a new form of quantization to tile neural network layers with sequences of bits to achieve sub-bit compression of binary-weighted neural networks. The method learns binary vectors (i.e. tiles) to populate each layer of a model via aggregation and reshaping operations. During inference, the method reuses a single tile per layer to represent the full tensor. We employ the approach to both fully-connected and convolutional layers, which make up the breadth of space in most neural architectures. Empirically, the approach achieves near full-precision performance on a diverse range of architectures (CNNs, Transformers, MLPs) and tasks (classification, segmentation, and time series forecasting) with up to an 8x reduction in size compared to binary-weighted models. We provide two implementations for Tiled Bit Networks: 1) we deploy the model to a microcontroller to assess its feasibility in resource-constrained environments, and 2) a GPU-compatible inference kernel to facilitate the reuse of a single tile per layer in memory.
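The tiling mechanism can be sketched in a few lines: a single stored ±1 tile is repeated and reshaped to materialize a full weight matrix, so per-weight storage falls below one bit. Tile contents and layer shapes here are illustrative, not trained values from the paper.

```python
# Sub-bit compression sketch: one stored binary tile populates an entire
# layer's weights via repetition (the paper's aggregation/reshaping step).
def expand_tile(tile, rows, cols):
    """Repeat a 1D +/-1 tile row-major to fill a rows x cols matrix."""
    flat = [tile[i % len(tile)] for i in range(rows * cols)]
    return [flat[r * cols:(r + 1) * cols] for r in range(rows)]

tile = [1, -1, -1, 1, 1, -1, 1, -1]      # 8 stored bits
W = expand_tile(tile, 16, 32)             # 512 weights materialized
x = [1.0] * 32
y = [sum(w * v for w, v in zip(row, x)) for row in W]   # dense matvec

bits_per_weight = len(tile) / (16 * 32)
print(bits_per_weight)                    # 1/64 bit of storage per weight
```

At inference a kernel would index into the single tile in memory instead of materializing W, which is the point of the GPU implementation described.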
  • Item (Open Access)
    Formal verification of source-to-source transformations for HLS
    (Colorado State University. Libraries, 2024-04-02) Pouchet, Louis-Noël, author; Tucker, Emily, author; Zhang, Niansong, author; Chen, Hongzheng, author; Pal, Debjit, author; Rodríguez, Gabriel, author; Zhang, Zhiru, author; ACM, publisher
    High-level synthesis (HLS) can greatly facilitate the description of complex hardware implementations, by raising the level of abstraction up to a classical imperative language such as C/C++, usually augmented with vendor-specific pragmas and APIs. Despite productivity improvements, attaining high performance for the final design remains a challenge, and higher-level tools like source-to-source compilers have been developed to generate programs targeting HLS toolchains. These tools may generate highly complex HLS-ready C/C++ code, reducing the programming effort and enabling critical optimizations. However, whether these HLS-friendly programs are produced by a human or a tool, validating their correctness or exposing bugs otherwise remains a fundamental challenge. In this work we target the problem of efficiently checking the semantic equivalence between two programs written in C/C++ as a means of ensuring the correctness of the description provided to the HLS toolchain, by proving that an optimized code version fully preserves the semantics of the unoptimized one. We introduce a novel formal verification approach that combines concrete and abstract interpretation with a hybrid symbolic analysis. Notably, our approach is mostly agnostic to how control-flow, data storage, and dataflow are implemented in the two programs. It can prove equivalence under complex bufferization and loop/syntax transformations, for a rich class of programs with statically interpretable control-flow. We present our techniques and their complete end-to-end implementation, demonstrating how our system can verify the correctness of highly complex programs generated by source-to-source compilers for HLS, and detect bugs that may elude co-simulation.
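To make the verification target concrete: the property being proved is that a transformed kernel computes exactly what the reference does. The sketch below only checks this on a random input by concrete interpretation of an (assumed) loop-tiled matrix-vector product; the paper's approach proves the equivalence for all inputs.

```python
import random

# Reference kernel: straightforward matrix-vector product.
def matvec_ref(A, x):
    n = len(A)
    return [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]

# "Optimized" version: the j loop is tiled, the kind of source-to-source
# transformation an HLS flow applies for data reuse. Accumulation order
# per output is preserved, so results match the reference exactly.
def matvec_tiled(A, x, T=4):
    n = len(A)
    y = [0.0] * n
    for jj in range(0, n, T):
        for i in range(n):
            for j in range(jj, min(jj + T, n)):
                y[i] += A[i][j] * x[j]
    return y

# Concrete-interpretation check on one random input (testing, not proof).
random.seed(0)
n = 8
A = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
x = [random.uniform(-1, 1) for _ in range(n)]
ref, opt = matvec_ref(A, x), matvec_tiled(A, x)
assert all(abs(a - b) < 1e-9 for a, b in zip(ref, opt))
print("outputs match on this input")
```

A formal equivalence checker establishes this match symbolically for every input and every loop bound, which is what lets it catch bugs that random co-simulation can miss.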