3B – Hot Topic: Intelligent Physical Systems: Test, Diagnosis, Reconfiguration and Correction

Day: April 10, 2017 Room: Pompeian II Time: 15:00 – 16:00
Organizer: Abhijit Chatterjee (Georgia Institute of Technology)
Context-Aware Self-Optimizing IoT Sensor Nodes
Speaker: Shreyas Sen (Purdue University)
Abstract: Following five decades of continued scaling, the size of a unit of computing has shrunk to virtually zero. In the future, computing will be ubiquitous and mostly invisible, taking forms such as distributed IoT sensors connected to the cloud. In this data-driven IoT revolution, the workloads, operating conditions and computational/communication demands on distributed, connected sensors will span ultra-large dynamic ranges of several orders of magnitude, all while the sensors remain energy-limited (often running on harvested energy, whose availability varies over time). This creates the following significant challenges and needs: (1) power management, particularly as the sensor nodes are expected to run opportunistically on harvested energy; (2) in-situ data analytics spanning data acquisition, feature extraction and classification with a 100X improvement in energy efficiency; (3) paradigm-shifting advances in low-power and adaptive radio technologies; and (4) energy- and context-aware real-time optimal control that distributes energy seamlessly between acquisition, computation and communication to meet a target accuracy at the cloud back-end while minimizing the energy cost per unit of information.
This talk will present recent advances in such context-aware sensing, in-sensor computation and classification, and adaptive communication, including minimum-power operation of IoT sensor nodes through dynamic self-optimization between computation and communication. The first part will highlight the challenges and opportunities of data acquisition in IoT nodes and focus on intelligent sensors with two interface modalities, namely speech and vision. The second part will highlight advances in context-adaptive communication that support widely varying IoT sensor data loads under variable energy availability, minimizing energy per unit of information at all times.
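As an illustration of the computation-versus-communication self-optimization described above, the following Python sketch picks the cheaper of two operating modes under the currently harvested energy budget. All names and energy figures here are invented for illustration and are not taken from the talk.

```python
# Hypothetical sketch of compute-vs-communicate self-optimization for an
# energy-harvesting IoT node. All energy numbers are illustrative assumptions.

RAW_FRAME_BITS = 64_000        # raw sensor frame size (assumed)
FEATURE_BITS = 1_000           # compressed feature vector size (assumed)
E_FEATURE_EXTRACT_NJ = 5_000   # energy to run in-sensor feature extraction (assumed)

def tx_energy_nj(bits, nj_per_bit):
    """Radio energy for one transmission at the current per-bit channel cost."""
    return bits * nj_per_bit

def choose_mode(harvested_nj, nj_per_bit):
    """Pick the mode that fits the current energy budget at the lowest cost.

    'raw'      : transmit the raw frame, classify at the cloud back-end.
    'features' : extract features locally, transmit only the features.
    'sleep'    : neither fits the budget; buffer and wait for more energy.
    """
    e_raw = tx_energy_nj(RAW_FRAME_BITS, nj_per_bit)
    e_feat = E_FEATURE_EXTRACT_NJ + tx_energy_nj(FEATURE_BITS, nj_per_bit)
    candidates = [(e, m) for e, m in ((e_raw, "raw"), (e_feat, "features"))
                  if e <= harvested_nj]
    return min(candidates)[1] if candidates else "sleep"

# An expensive channel favors in-sensor computation; a cheap one favors raw data.
print(choose_mode(harvested_nj=100_000, nj_per_bit=0.5))    # -> 'features'
print(choose_mode(harvested_nj=100_000, nj_per_bit=0.05))   # -> 'raw'
```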
Error Detection and Learning-assisted Correction in Real-time Systems Using Algorithmic Encoding
Speaker: Abhijit Chatterjee (Georgia Institute of Technology)
Abstract: Real-time systems for wireless communication, digital signal processing and control experience a wide gamut of operating conditions and suffer from the effects of parametric deviations, soft errors and noise induced by design deficiencies or near-thermal-voltage operation of electronic circuits. Prior work on algorithm-based fault tolerance (ABFT) focused on error diagnosis and correction driven by algorithmic encoding. This was followed by algorithmic noise tolerance (ANT) techniques, which relied on state prediction and update methods to correct errors in the presence of voltage overscaling. In this talk, we focus on an error detection and correction approach that combines aspects of both ABFT and ANT: it uses algorithmic encoding techniques for error detection but performs error correction in a stochastic (probabilistic) sense, similar to ANT, without the need to perform error diagnosis. In specific cases, guidance from the hardware, supported by appropriate "learning" algorithms, can significantly improve the error correction capabilities of the systems concerned. A major benefit is that the methods are also applicable to analog and switched-capacitor circuits with low hardware overhead. Applications to wireless communication systems, digital signal processing and control algorithms will be discussed, and test cases will be presented.
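To make the combination concrete, here is a minimal Python sketch, assuming a simple linearity-based checksum and a last-value predictor (both illustrative choices, not the specific encoding or learning scheme of the talk): errors are detected by an algorithmic encoding check and corrected by substituting a prediction, with no diagnosis step.

```python
import random

def faulty_dot(a, b):
    """Unreliable dot product: occasionally injects a large soft error."""
    r = sum(x * y for x, y in zip(a, b))
    return r + (100.0 if random.random() < 0.05 else 0.0)

def protected_dot(a, b, unreliable_dot, predictor, tol=1e-6):
    """Dot product with ABFT-style detection and ANT-style correction.

    Detection: by linearity, dot(a, b + 1) = dot(a, b) + sum(a), so the
    encoded computation must agree with the plain one plus a checksum;
    a mismatch flags an error without locating it.
    Correction: on a flagged error, return the predictor's estimate rather
    than diagnosing and recomputing (conservative: a correct plain result
    may be discarded if the error hit the encoded copy instead).
    """
    plain = unreliable_dot(a, b)
    encoded = unreliable_dot(a, [x + 1.0 for x in b])
    checksum = sum(a)  # equals dot(a, ones), computed reliably
    if abs(encoded - (plain + checksum)) <= tol:
        return plain                  # checksum consistent: accept result
    return predictor()                # checksum failed: stochastic correction

# Example: a last-value predictor for a slowly varying filter output.
history = [0.98]
for _ in range(5):
    out = protected_dot([0.2, 0.3, 0.5], [1.0, 1.0, 1.0],
                        faulty_dot, predictor=lambda: history[-1])
    history.append(out)
print(history)
```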
Approximate Computing: Beyond the Tyranny of Digital Abstractions
Speaker: Hadi Esmaeilzadeh (Georgia Institute of Technology)
Abstract: As our Dark Silicon study shows, the benefits from continued transistor scaling are diminishing due to energy and power constraints. Further, our results show that the current paradigm of general-purpose processors, multicore processors, will fall significantly short of the historical trends of performance improvement in the next decade. These shortcomings may drastically curtail the computing industry's ability to keep delivering new capabilities, the backbone of its economic ecosystem. Radical departures from conventional approaches are therefore necessary to provide continued performance and efficiency gains in computing. In this talk, I will present our work on general-purpose approximate computing across the system stack as one possible path forward. Specifically, I will talk about our hybrid analog-digital general-purpose processor that executes programs written in conventional languages. Our hybrid processor leverages an approximate algorithmic transformation that converts regions of code from a von Neumann model to a neural model, bridging the two models of computing. I will also briefly discuss how we leverage the approximate nature of programs to tackle memory subsystem bottlenecks. Finally, I will present the abstractions that enable approximate hardware design and reuse while preserving design productivity. Our work shows significant gains in general-purpose computing when the abstraction of near-perfect accuracy is relaxed, and it opens new avenues for research and development.
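A software-only sketch of the neural algorithmic transformation follows. The example kernel and the use of scikit-learn's MLPRegressor are assumptions made for illustration; the actual work maps the resulting networks onto hybrid analog-digital hardware rather than running them in software.

```python
# Sketch of the neural algorithmic transformation: an approximable region
# of code is replaced by a small trained neural network. The kernel below
# and the training setup are illustrative assumptions.
import math
import numpy as np
from sklearn.neural_network import MLPRegressor

def approximable_region(x, y):
    """Original precise code region (e.g., an inner loop of a filter)."""
    return math.sin(x) * math.exp(-abs(y)) + 0.5 * x * y

# 1. Observe input/output pairs of the region during training runs.
rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=(5000, 2))
t = np.array([approximable_region(x, y) for x, y in X])

# 2. Train a small MLP to mimic the region.
nn = MLPRegressor(hidden_layer_sizes=(8, 8), max_iter=2000, random_state=0)
nn.fit(X, t)

# 3. Replace calls to the region with the neural approximation.
def neural_region(x, y):
    return float(nn.predict([[x, y]])[0])

err = np.mean([abs(approximable_region(x, y) - neural_region(x, y))
               for x, y in rng.uniform(-2.0, 2.0, size=(100, 2))])
print(f"mean absolute error of the neural substitute: {err:.4f}")
```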
