Floating Point Unit with High Precision Efficiency
Vasudeva G1, Bharathi Gururaj2
1Dr. Vasudeva G, Assistant Professor, Department of Electronics and Communication Engineering, DSATM, Bangalore (Karnataka), India.
2Dr. Bharathi Gururaj, Associate Professor, Department of Electronics and Communication Engineering, KSIT, Bangalore (Karnataka), India.
Manuscript Received on 24 April 2025 | First Revised Manuscript Received on 28 April 2025 | Second Revised Manuscript Received on 06 May 2025 | Manuscript Accepted on 15 May 2025 | Manuscript published on 30 May 2025 | PP: 24-30 | Volume-15 Issue-2, May 2025 | Retrieval Number: 100.1/ijsce.B366915020525 | DOI: 10.35940/ijsce.B3669.15020525
© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC-BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Abstract: This paper presents the design of a Single Precision Floating Point Unit (FPU), a core component of modern processors. FPUs perform complex numerical calculations with high precision over a wide dynamic range, making them indispensable in scientific computing, graphics rendering, and machine learning. The proposed design is built around two main components: a Brent-Kung adder and a radix-4 Booth multiplier. The Brent-Kung adder performs addition and subtraction; its parallel-prefix structure keeps carry-propagation delay low even as operand width grows. Multiplication is handled by the radix-4 Booth multiplier, which reduces the number of partial products and arithmetic operations required while supporting both positive and negative operands. Integrating these components yields an FPU that carries out floating-point arithmetic efficiently and reliably. In scientific computing, this supports more accurate simulations and data analyses; in graphics processing, it translates to better image rendering and smoother visual effects; and in machine learning, it enables robust training and execution of algorithms on large datasets, ensuring dependable model performance.
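The Brent-Kung carry computation the abstract refers to can be illustrated in software. The following Python sketch (not taken from the paper, and only a behavioral model of what would be written in Verilog) builds per-bit generate/propagate signals, runs the Brent-Kung up-sweep and down-sweep over the prefix operator, and derives the sum from the group-generate carries; the function name and `width` parameter are illustrative choices.

```python
def brent_kung_add(a: int, b: int, width: int = 8) -> int:
    """Unsigned modular addition via a Brent-Kung parallel-prefix carry tree."""
    # Per-bit generate (g) and propagate (p) signals
    g = [((a >> i) & 1) & ((b >> i) & 1) for i in range(width)]
    p = [((a >> i) & 1) ^ ((b >> i) & 1) for i in range(width)]
    G, P = g[:], p[:]  # group signals, refined in place by the prefix tree

    def combine(i: int, j: int) -> None:
        # Prefix operator: node j covers the less-significant bit group
        G[i], P[i] = G[i] | (P[i] & G[j]), P[i] & P[j]

    # Up-sweep (reduction): completes prefixes at indices 1, 3, 7, ...
    step = 1
    while step < width:
        for i in range(2 * step - 1, width, 2 * step):
            combine(i, i - step)
        step *= 2

    # Down-sweep: fills in the remaining intermediate prefixes
    step = width // 4
    while step >= 1:
        for i in range(3 * step - 1, width, 2 * step):
            combine(i, i - step)
        step //= 2

    # Carry into bit i is the group generate of bits [0, i-1]; c0 = 0
    s = 0
    for i in range(width):
        c = G[i - 1] if i > 0 else 0
        s |= (p[i] ^ c) << i
    return s  # carry-out (G[width - 1]) is dropped, i.e. addition mod 2**width
```

The sparse down-sweep is what distinguishes Brent-Kung from denser prefix adders such as Kogge-Stone: it trades one extra logic level for far fewer combine cells and less wiring, which is why the hardware delay stays modest as operand width grows.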
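The partial-product reduction performed by radix-4 (modified) Booth recoding can likewise be sketched in Python. This is not the paper's Verilog implementation, only a behavioral model under assumed conventions (two's-complement multiplier, even `width`): each overlapping 3-bit group of the multiplier selects a multiple from {-2a, -a, 0, +a, +2a}, so roughly half as many partial products are accumulated as in bit-at-a-time multiplication.

```python
def booth_radix4_multiply(a: int, b: int, width: int = 8) -> int:
    """Signed multiply via radix-4 Booth recoding; b must fit in `width` signed bits."""
    # Recoding table: (b[i+1], b[i], b[i-1]) -> multiple of the multiplicand
    recode = {
        (0, 0, 0): 0,  (0, 0, 1): 1,  (0, 1, 0): 1,  (0, 1, 1): 2,
        (1, 0, 0): -2, (1, 0, 1): -1, (1, 1, 0): -1, (1, 1, 1): 0,
    }
    # Two's-complement bits of the multiplier (Python's >> sign-extends)
    bits = [(b >> i) & 1 for i in range(width)]

    acc = 0
    prev = 0  # the implicit b[-1] = 0 appended below the LSB
    for i in range(0, width, 2):
        group = (bits[i + 1], bits[i], prev)
        acc += recode[group] * (a << i)  # partial product at weight 4**(i/2)
        prev = bits[i + 1]
    return acc
```

Because each group spans two multiplier bits, only `width/2` partial products are generated, and the only hard multiples needed (±a, ±2a) come from negation and a one-bit shift, which keeps the hardware cost of the selection logic low.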
Keywords: Brent-Kung adder, Floating Point Unit, Radix-4 Booth Multiplier, Single Precision, Verilog.
Scope of the Article: Data Communication