Researchers claim to have discovered a novel approach that could potentially double the speed of computers without any additional hardware costs.
The method, called Simultaneous and Heterogeneous Multithreading (SHMT), was described in a paper co-authored by UC Riverside associate professor of electrical and computer engineering Hung-Wei Tseng and computer science graduate student Kuan-Chieh Hsu.
The SHMT framework currently runs on an embedded platform that simultaneously uses a multi-core ARM processor, an NVIDIA GPU, and a Tensor Processing Unit (TPU) hardware accelerator. In testing, the system achieved a 1.96x speedup and a 51% reduction in power consumption.
Energy reduction
Tseng explained that modern computing devices increasingly integrate GPUs, hardware accelerators for AI and machine learning, and digital signal processing (DSP) units as essential components. However, these components process information separately, shuttling data from one processing unit to the next and creating a bottleneck. SHMT seeks to address this problem by allowing these components to work on the same computation at the same time, thereby increasing processing efficiency.
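To make the idea concrete, here is a minimal sketch of the general pattern of splitting one workload across several processing units that run simultaneously. This is not the authors' framework: the function names and the thread-based "devices" are illustrative stand-ins invented for this example, whereas the real SHMT runtime partitions work into virtual operations and schedules them across actual CPU, GPU, and TPU hardware.

```python
# Illustrative sketch only -- NOT the SHMT implementation.
# Python threads stand in for heterogeneous hardware components
# (CPU cores, GPU, TPU) running their shares of the work at once.
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    # Stand-in kernel: each "device" squares its share of the data.
    return [x * x for x in chunk]

def run_simultaneously(data, n_devices=3):
    # Split the workload into one chunk per device, run all chunks
    # in parallel, then stitch the partial results back together.
    size = (len(data) + n_devices - 1) // n_devices
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=n_devices) as pool:
        parts = pool.map(process_chunk, chunks)
    return [y for part in parts for y in part]

print(run_simultaneously(list(range(6))))  # → [0, 1, 4, 9, 16, 25]
```

The hard part, which this toy omits and the paper addresses, is deciding which portions of a computation each component should get so that slower units do not drag down the whole job.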
The implications of this discovery are significant. Not only could it reduce computer hardware costs, but it could also decrease carbon emissions from producing the energy needed to run servers in large data processing centers. Additionally, it could reduce the demand for water used to cool servers.
Tseng told us that the SHMT framework, if adopted by Microsoft in a future version of Windows, could provide a free performance boost for users. The research's energy-saving claim rests on a simple idea: by shortening execution time, less energy is consumed, even on the same hardware.
However, there is a problem (isn't there always one?). Tseng's paper warns that more research is needed to answer questions about system implementation, hardware support, code optimization, and which applications stand to benefit most.
Although no hardware engineering efforts are necessary, Tseng says that “we definitely need re-engineering of the runtime system (e.g. OS drivers) and programming languages (e.g. Tensorflow/PyTorch)” to make it work.
The paper, presented at the 56th Annual IEEE/ACM International Symposium on Microarchitecture in Toronto, Canada, was recognized by the Institute of Electrical and Electronics Engineers (IEEE), which selected it as one of 12 papers for its “Top Picks from the Computer Architecture Conferences” issue, to be published later this year.