IT: Basic Fundamental Concepts of the Computer Hardware:

Computers have evolved from simple devices to a 21st-century staple in just a few decades. But despite being a relatively new technology, most people have no idea how a simple rectangle — small enough to fit your backpack — is capable of everything from complex math to playing video, audio, and running sophisticated software.

Hello, dear readers and followers, welcome back to my new story; I hope all’s well. Today I’d like to talk about some fundamental concepts of computer hardware.

A computer is a programmable electronic device capable of processing information. Composed of hardware and software, computers operate on two levels: they receive data through an input route, either live or through a digital storage unit, and send out an output.

Modern computers shouldn’t be confused with the obsolete job title of “computer” from the 19th century. Both perform long, tedious mathematical calculations and information processing, but one is a person and the other is a machine.

How Computers Work: Binary & Data

Computers don’t understand words or numbers the way humans do. A computer understands only the binary number system, and therefore we must communicate with it using binary code.

Binary code is also referred to as machine language or machine code. In the binary number system, all numbers are represented using only two digits: 0 (zero) and 1 (one).

Computers use binary — the digits 0 and 1 — to store data. A binary digit, or bit, is the smallest unit of data in computing and is represented by a 0 or a 1. Binary numbers are made up of binary digits (bits), e.g. the binary number 1001.
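To see this concretely, here is a quick sketch (in Python, a language choice of mine purely for illustration) converting between decimal and binary notation:

```python
# Interpret the binary string "1001" as a base-2 number: 8 + 0 + 0 + 1.
n = int("1001", 2)
print(n)            # 9

# Go the other way: render decimal 9 as a 4-bit binary string.
bits = format(9, "04b")
print(bits)         # 1001
```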

The circuits in a computer’s processor are made up of billions of transistors. A transistor is a tiny switch that is activated by the electronic signals it receives. The digits 1 and 0 used in binary reflect the on and off states of a transistor.

Computer programs are sets of instructions. Each instruction is translated into machine code — simple binary codes that activate the CPU. Programmers write computer code and this is converted by a translator into binary instructions that the processor can execute.

All software, music, documents, and any other information that is processed by a computer, is also stored using binary.
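As an illustration of that idea, the Python snippet below shows how a short piece of text becomes bytes, and how each byte is just eight bits:

```python
# Encode text into bytes, then show each byte as its 8-bit binary pattern.
text = "Hi"
raw = text.encode("utf-8")                      # bytes 72 and 105
bit_string = " ".join(format(b, "08b") for b in raw)
print(bit_string)                               # 01001000 01101001
```

The same principle applies to images, audio, and programs: at the bottom, everything is a sequence of bits.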

How Computers Work: Hardware and Software

It’s a simple question, but one that everyone ponders from time to time: how does that computer in front of you actually work?

Hardware is any physical device or equipment used in or with a computer system (anything you can see and touch). Software is a set of instructions or programs that tells a computer what to do or how to perform a specific task (computer software runs on hardware). The main types of software are system software and application software.

32-bit vs. 64-bit hardware and software systems

There are two types of processors in computing: 32-bit and 64-bit. The processor type determines how much memory the operating system (OS) can address. There are a number of differences between 32-bit and 64-bit operating systems.

  • 32-bit system — It can ideally access about 2^32 memory addresses, which equals 4 GB (gigabytes) of physical memory or RAM. It can also access more than 4 GB of physical memory, but not very efficiently.
  • 64-bit system — It can ideally access about 2^64 memory addresses, which equals roughly 18 quintillion bytes of physical memory or RAM. In short, it can easily handle any amount of memory greater than 4 GB.
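The arithmetic behind those figures is easy to verify, for example in Python:

```python
# A 32-bit address space has 2**32 distinct addresses (one byte each).
addresses_32 = 2 ** 32
print(addresses_32 / 1024 ** 3)   # 4.0 -> 4 GB of addressable RAM

# A 64-bit address space is enormously larger.
addresses_64 = 2 ** 64
print(addresses_64)               # 18446744073709551616 (~18 quintillion)
```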

Computer software

Application software

A computer program that provides users with tools to accomplish a specific task.
Examples of application software: word processing, spreadsheets, presentations, database management, internet browsers, email programs, media players, accounting, pronunciation, translation, desktop publishing, enterprise software, etc.

System Software

System software is designed to run a computer’s hardware and application software, and to make the computer system available for use. It serves as the interface between the hardware, application software, and the user.

  • Main functions of system software — allocating system resources, managing storage space, storing and retrieval of files, providing security, etc.
  • Main types of systems software — operating system, device driver, utility software, programming software, etc.


Firmware

Firmware is where the line between hardware and software gets blurry. It’s software that’s physically etched into a piece of hardware.

Firmware is the first thing that kickstarts your computer when you switch it on, a simple program that instructs your computer to launch the OS. Without it, your computer won’t launch your operating system or other components, and you’ll be stuck with hardware that you can’t communicate with.

Operating System (OS)

An operating system is a piece of software that manages your computer’s hardware and software resources. Similarly, without an OS, you can’t communicate with your computer even using your input devices.

Device driver

A software program designed to control a particular hardware device that is attached to a computer.

  • The main purpose of a device driver is to act as a translator between the hardware device and the operating systems or applications that use it.
  • It instructs the computer on how to communicate with the device by translating the operating system’s instructions into a language the device can understand, so the necessary task can be performed.
  • Examples of device drivers: printer driver, display driver, USB driver, sound card driver, motherboard driver, ROM driver, etc.

Utility software

A type of system software that helps set up, analyze, configure, strengthen, and maintain a computer, and performs a very specific task (e.g. antivirus software, backup software, memory testers, screen savers, etc.).

CPU, Memory, Input & Output

A computer processes the input to produce the desired output, but how does a machine outperform the human brain?

Conventional computers don’t try to mimic the human brain. Instead, they run commands sequentially, with data constantly moving between input, memory, and the processor. Neuromorphic computers, on the other hand, process data in parallel, making them faster, more energy-efficient, and closer in structure to the human brain.

Overall, a computer works in four steps:

  1. Input: Input is the data before processing. It comes from the mouse, keyboard, microphone, and other external sensors.
  2. Storage: The storage is how the computer retains input data. The hard drive is used for long-term and mass data storage while the data set for immediate processing is stored temporarily in the Random Access Memory (RAM).
  3. Processing: Processing is where input gets transformed into output. The computer’s Central Processing Unit (CPU) is its brain. It’s responsible for executing instructions and performing mathematical operations on the input data.
  4. Output: Output is the final result of data processing. It can be anything from images, video, or audio content, even the words you type using a keyboard. You can also receive the output through a printer or a projector instead of directly through your device.
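The four steps above can be sketched as a toy Python program (the function names are illustrative, not a real API):

```python
def run_computer(get_input, process, send_output):
    """Toy model of the input -> storage -> processing -> output cycle."""
    memory = []                    # 2. storage: stands in for RAM
    data = get_input()             # 1. input
    memory.append(data)
    result = process(memory[-1])   # 3. processing
    send_output(result)            # 4. output
    return result

# Example: the "CPU" doubles whatever number comes in from the "keyboard".
answer = run_computer(get_input=lambda: 21,
                      process=lambda x: x * 2,
                      send_output=print)       # prints 42
```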

The Hardware Components of a Computer

The hardware components are everything you can physically touch and see in a computer, including all the input and output devices. Hardware is also the physical processing parts like storage, CPU, graphics card, sound card, RAM, and motherboard, all of which play an important role in contemporary computers.


The Motherboard

The motherboard is the primary communication hub between the computer’s hardware components. It’s the main circuit board to which everything must physically connect, except for components that rely on Bluetooth or Wi-Fi. Without a motherboard, it’s impossible to have a functioning modern computer.

The Central Processing Unit (CPU)

The Central Processing Unit (CPU), also known as a processor, is the brain of the computer and is thus its most important component, containing all the circuitry needed to process input, store data, and output results.

The CPU is the unit that performs most of the processing inside a computer. It processes all instructions received by software running on the PC and by other hardware components and acts as a powerful calculator.

The CPU is placed into a specific square-shaped socket found on all motherboards by inserting its metallic connectors or pins found on the underside. Each socket is built with a specific pin layout to support only a specific type of processor.

Since modern CPUs produce a lot of heat and are prone to overheating, they must be kept cool with appropriate fans or ventilation systems and covered with heat sinks and thermal paste.

To control instructions and data flow to and from other parts of the computer, the CPU relies heavily on a chipset, which is a group of microchips located on the motherboard.

The CPU itself has three main components:

  1. Memory or Storage Unit
  2. Control Unit
  3. ALU (Arithmetic Logic Unit)

The CPU’s main function is to take input from a peripheral (keyboard, mouse, printer, etc) or computer program, and interpret what it needs. The CPU then either outputs information to your monitor or performs the peripheral’s requested task.

What Is nm in a CPU?

nm stands for nanometer, a unit of measure for length. 1nm is equal to 0.000000001 meters — which is absolutely minute. In a CPU, nm is used to measure the size of the transistors that make up a processor.

There are billions of transistors in a CPU that perform calculations through electrical signals by switching on and off. In technological terms, a processor’s nm is also referred to as a process node, technology process, or just node: the lithography-based manufacturing process used to fabricate the CPU.

So, a smaller nm figure in a CPU generally means a more efficient processing unit. While considering all the other specs of a phone or a PC, you should take the nm manufacturing process of the CPU into account as well.

CPU Cache

A CPU memory cache is just a really fast type of memory. In the early days of computing, processor speed and memory speed were low. However, during the 1980s, processor speeds began to increase — rapidly. The system memory at the time (RAM) couldn’t cope with or match the increasing CPU speeds, so a new type of ultra-fast memory was born: CPU cache memory.

Cache memory design is always evolving, especially as memory gets cheaper, faster, and denser. For example, one of AMD’s most recent innovations is Smart Access Memory and the Infinity Cache, both of which increase computer performance.
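To make the idea concrete, here is a toy Python model of a small direct-mapped cache sitting in front of slow main memory (a teaching sketch, not how real hardware is built):

```python
class ToyCache:
    """A tiny direct-mapped cache in front of 'slow' main memory."""

    def __init__(self, main_memory, num_slots=4):
        self.main_memory = main_memory
        self.num_slots = num_slots
        self.slots = {}                  # slot index -> (address, value)
        self.hits = self.misses = 0

    def read(self, address):
        slot = address % self.num_slots  # direct mapping: address -> slot
        entry = self.slots.get(slot)
        if entry is not None and entry[0] == address:
            self.hits += 1               # fast path: the value is cached
            return entry[1]
        self.misses += 1                 # slow path: fetch from main memory
        value = self.main_memory[address]
        self.slots[slot] = (address, value)
        return value

memory = {addr: addr * 10 for addr in range(16)}
cache = ToyCache(memory)
for addr in (1, 2, 1, 1, 2):             # repeated reads hit the cache
    cache.read(addr)
print(cache.hits, cache.misses)          # 3 2
```

Real caches add associativity, eviction policies, and multiple levels (L1/L2/L3), but the hit/miss principle is the same.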

AMD or Intel? Why You Only Have Two Choices When It Comes to Processors

When you buy a PC or laptop, it has one of two brand names on the side: Intel or AMD. So, how come you can only pick between Intel and AMD when it comes to processors (CPUs)?

Let’s take a deep dive into the history of the x86 processor and find out how we ended up with Intel vs. AMD as the only option.

Intel CPUs

The cores in your computer’s CPU have evolved at a steady rate over the years. We first had single-core CPUs, but that quickly evolved to multithreading, and from there, multi-core setups, starting with dual-core designs before launching into quad-core, octa-core, and further.

Intel’s 12th Gen CPUs treated us to an unexpected yet pleasing twist: two different kinds of cores, E-Cores and P-Cores, in one CPU package.


AMD CPUs

AMD does a pretty good job differentiating each of its CPU offerings on desktops. They’re plenty powerful despite being AMD’s most efficient solutions, and they’ll be reliable on whatever PC you get.

Despite the fact that AMD has multiple laptop lineups, the U-series remains the best in terms of price-to-performance. You’ll find it on cheaper computers, and it provides great performance for everyone.

Is AMD or Intel Better for Gaming

When it comes to desktop CPUs, there are two names in town: Intel and AMD. These are the desktop processor behemoths that dominate the market. When you want to build or buy a new gaming PC, it’ll have an Intel or AMD CPU powering your games.

The desktop CPU market dynamic has shifted in recent years, too. For a long time, AMD processors were only good for entry-level or budget options. The introduction of AMD Ryzen CPUs changed that perception drastically, with AMD’s Zen architecture creating substantial competition against Intel’s stranglehold on the market.

So much so that AMD Ryzen CPUs dominate the CPU market, at least for the most recent CPU generations. Mindfactory regularly releases up-to-date CPU sales figures, offering a decent snapshot of the market.

All of these factors come together to make CPU comparisons a difficult proposition. How do you know which one you should buy? Here are a few tips that may help.

The Differences Between Core i7, Core i5, and Core i3:

The processor is the brain of a computer, but understanding the difference between processors requires a lot of brainpower of your own. It’s time to demystify that. Read on to learn about the difference between an Intel Core i5 and a Core i7, if a Core i3 is any good, and whether you should buy an Intel Core i9.

An Intel Core i7 is better than a Core i5, which in turn is better than a Core i3. The trouble is knowing what to expect within each tier. Things go a little deeper.

First, Core i7 does not mean a seven-core processor! These are just names to indicate relative performance.

The older Intel Core i3 series had only dual-core processors, but more recent generations have a mixture of dual- and quad-core CPUs.

It is a similar story for older Intel Core i5 CPUs. Older generations of Intel Core i5 processors had a mixture of dual- and quad-core processors, but later generations typically feature a quad- or even hexa-core (six-core) configuration, along with faster clock speeds than the Core i3. The latest i5 generation includes 10-core CPUs.

The latest Intel Core i7 CPU generations include quad-core, hexa-core, octa-core, and 12-core configurations. Again, the Intel Core i7 CPUs outperform their Core i5 counterparts and are much faster than the entry-level Core i3 CPUs.

Quad-cores are usually better than dual-cores and hexa-cores better than quad-cores, and so on, but this isn’t always accurate across CPU generations — more on these differences in a moment.

Intel releases “families” of chipsets, called generations. At the time of writing, Intel has launched its 12th-generation CPUs, named Alder Lake. Each family, in turn, has its own line of Core i3, Core i5, and Core i7 series of processors. The latest CPU generations have another tier above the Core i7, the Intel Core i9.

The Intel Core i9 series is Intel’s extreme performance line. Most Core i9 CPUs are now 16-core beasts (doubling the octa-configuration of the previous generation) and come with a very high clock speed, enabling them to perform to a very high standard for prolonged periods. They may also come with a larger CPU memory cache than their counterparts, enabling faster overall performance.

Intel’s 12th Gen CPUs added another consideration for would-be buyers: Hybrid Cores. With the 12th generation Intel Core, we get two different kinds of cores, Performance and Efficiency. The processor uses Performance Cores (P-Cores) for priority apps running in the foreground, while the Efficiency Cores (E-Cores) handle background tasks. For example, when you’re gaming, the P-Cores will run your game while the E-Cores work on background tasks, like your streaming app. More generally, P-Cores are best suited to single-threaded and lightly-threaded tasks, like games and productivity apps, while highly-threaded apps are assigned to the E-Cores. This ensures that your computer makes efficient use of the available processor power.

Now while this doesn’t make a huge amount of difference when you’re choosing between an i3, i5, or i7 CPU, each of these CPU types comes with a different Hybrid Core configuration.

The physical cores largely determine the speed of a processor. But with how modern CPUs work, you can get a boost in speed with virtual cores, activated through hyper-threading.

In layman’s terms, hyper-threading allows a single physical core to act as two virtual cores, performing multiple tasks simultaneously without activating a second physical core (which would require more power from the system).

If both physical cores of a dual-core CPU are active and using hyper-threading, those four virtual cores will compute faster. However, do note that physical cores are faster than virtual cores: a quad-core CPU will perform much better than a dual-core CPU with hyper-threading!
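You can see hyper-threading’s effect on your own machine: the operating system reports logical (virtual) cores, which on a hyper-threaded CPU is typically double the physical core count. A minimal check in Python:

```python
import os

# Number of logical cores the OS can schedule work on, including
# hyper-threaded (virtual) cores. May be None on platforms where
# the count cannot be determined.
logical_cores = os.cpu_count()
print(logical_cores)
```

Counting physical cores portably needs a third-party library such as psutil, so only the logical count is shown here.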

The difficulty is that there is no blanket approach from Intel regarding hyper-threading on its CPUs. For a long time, only Intel i7 CPUs featured hyper-threading, with a few Intel Core i3 CPUs but no Intel Core i5 CPUs. That situation changed with Intel’s 10th Gen CPUs, with some Core i5 processors launching with hyper-threading, but prior to this, Intel disabled hyper-threading on some of its Intel Core i7 9th Gen CPUs in response to security risks.

In short, you’ll have to check the individual CPU for its hyper-threading potential, as Intel appears to chop and change with each processor generation.

All of the latest Intel Core processors now support Turbo Boost frequencies. Previously, Intel Core i3 owners were left out in the cold, stuck with their regular CPU speeds. However, as of the Intel Core i3-8130U, the CPU manufacturer began adding higher frequency modes to its entry-level CPU series.

Of course, Core i5, Core i7, and Core i9 CPUs all feature Turbo Boost, too.

Turbo Boost is Intel’s proprietary technology to intelligently increase a processor’s clock speed if the application demands it. So, for example, if you are playing a game and your system requires some extra horsepower, Turbo Boost will kick in to compensate.

Turbo Boost is useful for those who run resource-intensive software like video editors or video games, but it doesn’t have much of an effect if you’re just browsing the web and using Microsoft Office.

Graphics Processing Unit

The Graphics Processing Unit (GPU) is dedicated to processing visual imagery on your computer. Powerful GPUs are essential to rendering high-quality images and graphics or playing video games. Graphics processing can either be handled by its own component (a discrete graphics card) or be integrated with the CPU.

Intel Graphics: Xe, HD, UHD, Iris, Iris Pro, or Plus

Ever since graphics were integrated into the processor chip, integrated graphics have become an important decision point in buying CPUs. But as with everything else, Intel has made the system a little confusing.

Intel Graphics Technology is the umbrella term covering all Intel integrated graphics. Within that, there are different generations of Intel integrated graphics technology, confusingly referred to by both series names and generational names. Still following?

  • Intel HD Graphics was first introduced in 2010 as the first series under this umbrella but is actually Gen5 (5th generation) in terms of development.
  • Intel Iris Graphics and Intel Iris Pro Graphics were introduced in 2013 and are Gen7 integrated graphics units. The Iris Pro Graphics units were pretty big news at the time as they integrated DRAM into the module, giving the graphics performance an extra boost.
  • Intel UHD Graphics launched with Intel’s 10th Generation mobile CPUs and is only available on certain laptop model processors.
  • Intel Xe (known as Gen12 integrated graphics) was a massive step forward for integrated graphics, using a new architecture to deliver much higher performance than previous generations. Adding to the confusion, some Intel UHD Graphics models use the Intel Xe architecture, further muddying the water.

The best advice for how to interpret these? Just don’t. Instead, rely on Intel’s naming system. If the processor’s model ends with HK, you know it’s a model with high graphics performance and an unlocked CPU. If it ends with a G, that means there is a dedicated GPU, not one of Intel’s chips.

Choosing Between Intel Cores i3 vs. i5 vs. i7 vs. i9

Generally speaking, here’s who each processor type is best for:

  • Intel Core i3: Basic users. Economic choice. Good for browsing the web, using Microsoft Office, making video calls, and social networking. Not for gamers or professionals.
  • Intel Core i5: Intermediate users. Those who want a balance between performance and price. Good for gaming if you buy a G processor or a Q processor with a dedicated graphics processor.
  • Intel Core i7: Power users. You multitask with several windows open at the same time, you run apps that require a lot of horsepower, and you hate waiting for anything to load.
  • Intel Core i9: The extreme performance tier, marketed to those who demand the best and fastest performance in every area of their machine.

Memory & Storage:

Computers have two fundamental types of memory — main memory and secondary storage. Main memory is where data and instructions are stored so that they can be accessed directly by the CPU. Secondary storage is used to permanently store data such as the operating system and the user’s files.

Computer memory/storage can be classified in three ways: primary, secondary, and off-line.

1. Primary Memory/Storage

Primary memory is the computer’s main memory, which is directly accessible by the CPU and often much faster than secondary storage.

RAM will hold the loaded operating system, plus all running applications and files.

Examples of primary memory/storage:

  1. Random Access Memory (RAM) — solid state
  2. Read Only Memory (ROM) — solid state

Difference between Volatile Memory and Non-Volatile Memory

1. Volatile Memory:

Volatile memory is memory hardware that fetches and stores data at high speed. It is also referred to as temporary memory. Data in volatile memory is retained only while the system is powered on; once the system is turned off, the data is automatically lost. RAM (Random Access Memory) and cache memory are common examples of volatile memory. Here, fetching and storing data is fast and economical.

2. Non-Volatile Memory:

Non-volatile memory is memory in which data is not lost even when the power is shut down. ROM (Read Only Memory) is the most common example of non-volatile memory. It is slower and less economical to fetch and store data in than volatile memory, but it stores a much larger volume of data. Any information that needs to be kept for an extended amount of time is stored in non-volatile memory, which has a huge impact on a system’s storage capacity.
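The distinction can be demonstrated in miniature: a Python variable lives in (volatile) RAM and dies with the process, while data written to disk is non-volatile and survives. A rough sketch (the file name is arbitrary):

```python
import os
import tempfile

in_memory = {"answer": 42}   # "volatile": exists only in this process's RAM

# "Non-volatile": write the value to disk.
path = os.path.join(tempfile.gettempdir(), "nonvolatile_demo.txt")
with open(path, "w") as f:
    f.write(str(in_memory["answer"]))

del in_memory                # simulate losing power / RAM contents

with open(path) as f:        # the on-disk copy is still there
    recovered = int(f.read())
print(recovered)             # 42
os.remove(path)              # clean up
```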

1. RAM (Random Access Memory)

RAM stands for random access memory, and it’s one of the most fundamental elements of computing. RAM is volatile memory that temporarily stores the files you are working on.

RAM is a temporary memory bank where your computer stores data it needs to retrieve quickly. RAM keeps data easily accessible so your processor can quickly find it without having to go into long-term storage to complete immediate processing tasks.

Every computing device has RAM, whether it’s a desktop computer (running Windows, MacOS, or Linux), a tablet or smartphone (running Android or iOS), or even an IoT computing device (like a smart TV). Nearly all computers have a way of storing information for longer-term access, too. But the memory needed to run the process you’re currently working on is stored and accessed in your computer’s RAM.

RAM is a form of temporary storage that gets wiped when you turn your computer off. RAM offers lightning-fast data access, which makes it ideal for the processes, apps, and programs your computer is actively working on, such as the data needed to surf the internet through your web browser.

To understand RAM, let’s use the analogy of a physical desk. Your working space is the top of the desk. That’s where you keep everything you frequently use within arm’s reach, so you won’t waste time searching through your drawers. By contrast, anything you don’t use that much or that you want to save for later goes into a desk drawer.

On your computer, your RAM is like the top of your desk, where you keep everything you need quick access to. And the data that you don’t use much or want to save for later is stored on a hard disk, either locally in your device or in the cloud.

Types of RAM:

  1. Static RAM (SRAM), which stores each bit of data using the state of a six-transistor memory cell.
  2. Dynamic RAM (DRAM), which stores each bit of data using a transistor and capacitor pair that constitutes a DRAM memory cell.

2. Read Only Memory (ROM)

ROM is a non-volatile memory that permanently stores instructions for your computer. Read-only memory is a type of storage medium that permanently stores data on personal computers (PCs) and other electronic devices.

It contains the programming needed to start a PC, which is essential for boot-up; it performs major input/output tasks and holds programs or software instructions. This type of memory is often referred to as “firmware”, and how it can be altered has been a design consideration throughout the evolution of the modern computer.

Types of ROM:

  1. Programmable ROM, where the data is written after the memory chip has been created. It is non-volatile.
  2. Erasable Programmable ROM, where the data on this non-volatile memory chip can be erased by exposing it to high-intensity UV light.
  3. Electrically Erasable Programmable ROM, where the data on this non-volatile memory chip can be erased and rewritten electrically, in place.
  4. Mask ROM, in which the data is written during the manufacturing of the memory chip.

What is the difference between RAM and ROM?

The difference between RAM (Random Access Memory) and ROM (Read Only Memory) is worth spelling out. RAM is a form of computer memory that can be read and changed in any order, typically used to store working data and machine code.

ROM is a type of non-volatile memory used in computers and other electronic devices. Comparing RAM and ROM side by side helps in understanding the basics of both.

Virtual Memory

Virtual memory is a feature of an operating system that uses hardware and software to compensate for shortages of physical memory. It transfers pages of data from random access memory (RAM) to disk storage. Microsoft compares this process to how a “movie ticket serves as a controlling agent between the demand and the seats in a theatre”. It’s a process that is available on Windows, MacOS, Android and iOS.
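Under the hood, virtual memory splits every address into a page number and an offset, and a page table records where each page currently lives. A simplified Python sketch, assuming 4 KB pages (real operating systems add permissions, swapping, and hardware TLBs):

```python
PAGE_SIZE = 4096  # 4 KB, a common page size

def translate(virtual_address, page_table):
    """Map a virtual address to (physical frame, offset within the page)."""
    page_number = virtual_address // PAGE_SIZE
    offset = virtual_address % PAGE_SIZE
    frame = page_table.get(page_number)
    if frame is None:
        # Page fault: the OS would now load this page from disk into RAM.
        raise LookupError(f"page fault on page {page_number}")
    return frame, offset

page_table = {0: 7, 1: 3}              # pages 0 and 1 are resident in RAM
print(translate(5000, page_table))     # (3, 904): page 1 maps to frame 3
```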

2. Secondary Memory/Storage

Secondary storage is a non-volatile medium that holds data until it is deleted or overwritten.

It is sometimes referred to as external memory and auxiliary storage. Secondary storage is where programs and data are kept on a long-term basis.

Examples of secondary memory/storage:

  1. Hard Disk Drive (HDD) — magnetic storage
  2. Solid State Drive (SSD) — solid state

3. Off-line Memory/Storage

Off-line refers to non-volatile storage that can be easily removed from the computer. This is often used to transport data and keep backups for protection.

Examples of off-line memory/storage:

  1. CD, DVD, Blu-ray — optical storage
  2. USB Flash Drive — solid state
  3. Removable HDD / SSD

Files & File Systems

We use many different file types, such as text files, image files, and music files, in our daily lives.

A file is a collection of related information recorded on secondary, non-volatile storage such as magnetic disks, optical discs, and tapes. It is the unit through which a program receives input and delivers output.

In general, a file is a sequence of bits, bytes, or records whose meaning is defined by the file’s author and user. Every file has a logical place where it is stored and retrieved. The data inside a file is organized in some way, and we call that organization a file format. We can create our own format, but it is easiest and best to use an existing standard such as JPEG, PNG, or TXT. Basically, a file contains metadata and a payload. Let’s take the bitmap (BMP) format as an example of how file metadata works.
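As a concrete sketch (Python, with field offsets taken from the standard BMP file header and BITMAPINFOHEADER layout), here is how the metadata at the front of a BMP file can be read:

```python
import struct

def parse_bmp_metadata(data: bytes) -> dict:
    """Read the fixed-position metadata fields at the start of a BMP file."""
    if data[:2] != b"BM":
        raise ValueError("not a BMP file")
    file_size, pixel_offset = struct.unpack_from("<I4xI", data, 2)
    width, height = struct.unpack_from("<ii", data, 18)
    bits_per_pixel, = struct.unpack_from("<H", data, 28)
    return {"file_size": file_size, "pixel_data_offset": pixel_offset,
            "width": width, "height": height,
            "bits_per_pixel": bits_per_pixel}

# Build a minimal header for a 2x2, 24-bit image by hand to try it out.
header = (b"BM"
          + struct.pack("<I", 70)        # total file size in bytes
          + b"\x00" * 4                  # reserved fields
          + struct.pack("<I", 54)        # offset where pixel data begins
          + struct.pack("<I", 40)        # DIB header size
          + struct.pack("<ii", 2, 2)     # width and height in pixels
          + struct.pack("<HH", 1, 24))   # colour planes, bits per pixel
print(parse_bmp_metadata(header))
```

The payload, the actual pixel rows, begins at the offset stored in the metadata; most structured file formats work on a similar metadata-plus-payload principle.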

A filesystem is the set of methods and data structures the operating system employs to keep track of files on a disk or partition. With a filesystem, the data placed in a storage unit can be interpreted by the operating system; without one, it is just one large block of data with no way to tell where one file begins and another ends.

There are many different kinds of file systems; let’s look at some of the main types.
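Whatever the filesystem type, the operating system exposes its bookkeeping through a common interface. In Python, for example, os.stat returns the metadata the filesystem keeps about a file:

```python
import os
import tempfile

# Create a small file, then ask the filesystem what it recorded about it.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("hello filesystem")
    path = f.name

info = os.stat(path)
print(info.st_size)    # size in bytes: 16
print(info.st_mtime)   # last-modified timestamp kept by the filesystem
os.remove(path)        # clean up
```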

Computer Monitors / Display Screen:

Monitors are a major part of how we use desktop computers. We may not even realize it, but there are several things worth learning about them.

A monitor is an electronic output device that is also known as a video display terminal (VDT) or a video display unit (VDU). A monitor is the primary output hardware of a computer. It displays pictures, text, and videos in real-time, allowing the user to interact with the computer.

Although a monitor is almost like a TV, its resolution is typically much higher than a TV’s. The first computer monitor, introduced on 1 March 1973, was part of the Xerox Alto computer system.

It comes with a screen, a power supply, buttons to adjust screen settings, and a casing that holds these components. But these are not the only features of a monitor.

Most desktop displays use liquid crystal display (LCD) or cathode ray tube (CRT) technology, while nearly all portable computing devices such as laptops incorporate LCD technology. Because of their slimmer design and lower energy consumption, monitors using LCD technology (also called flat panel or flat screen displays) are replacing the venerable CRT on most desktops.

Resolution refers to the number of individual dots of color, known as pixels, contained on a display. Resolution is expressed as the number of pixels on the horizontal axis by the number on the vertical axis, such as 800x600. Resolution is affected by a number of factors, including the size of the screen.

As monitor sizes have increased over the years, display standards and resolutions have changed.

Common Display Standards and Resolutions

XGA (Extended Graphics Array) = 1024x768
SXGA (Super XGA) = 1280x1024
UXGA (Ultra XGA) = 1600x1200
QXGA (Quad XGA) = 2048x1536
WXGA (Wide XGA) = 1280x800
WSXGA+ (Wide SXGA plus) = 1680x1050
WUXGA (Wide Ultra XGA) = 1920x1200
WQHD (Wide Quad HD) = 2560x1440
WQXGA (Wide Quad XGA) = 2560x1600
QSXGA (Quad Super XGA) = 2560x2048

In addition to the screen size, display standards and resolutions are related to something called the aspect ratio. Next, we’ll discuss what an aspect ratio is and how screen size is measured.
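The aspect ratio is just the width-to-height ratio reduced to lowest terms, which for any resolution in the list above can be computed with a greatest common divisor:

```python
from math import gcd

def aspect_ratio(width, height):
    """Reduce width:height to its simplest ratio, e.g. 2560x1440 -> (16, 9)."""
    d = gcd(width, height)
    return width // d, height // d

print(aspect_ratio(1024, 768))    # (4, 3)  - XGA
print(aspect_ratio(2560, 1440))   # (16, 9) - WQHD
print(aspect_ratio(1920, 1200))   # (8, 5), conventionally written 16:10
```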

In today’s digital world we encounter many different types of monitors. We spend much of our time sitting in front of them: playing games, watching movies, and many other things.

A good display can make a real difference to the user experience. The properties of display devices have also improved a lot thanks to innovation in display technologies.


The Video Electronics Standards Association (VESA)

Although the group is now known as VESA, it was officially registered in 1989 as the Video Electronics Standards Association in California. According to a 1988 issue of InfoWorld, NEC Home Electronics created the association to promote the 800x600 pixel Super VGA resolution among companies. This standard was meant to supplant the 640x480 pixel VGA standard IBM introduced in 1987.

Today, the group holds several industry standards for personal and workstation computers, as well as other consumer electronics. It has a membership of over 300 companies, including AMD, Apple, Google, Intel, LG, NVIDIA, Oculus, Qualcomm, Samsung, Valve, and more.

Although the group holds several standards and specifications, these are some of its most used standards.


32:9 vs. 21:9 Ultrawide Monitors

In recent years, ultrawide monitors have become essential purchases for people looking to play games on their PCs. The typical definition of ultrawide varied over the years, but more recently, it has settled on one single aspect ratio: 21:9. It’s considerably wider than 16:9 and lets you see more when you play games.

However, a new ultrawide ratio has emerged: 32:9. It’s still in its infancy, and it’s relatively expensive, but it’s seeing wider (get it!) use these days.
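To make the width difference concrete, here is a minimal sketch comparing how much wider 21:9 and 32:9 screens are than a 16:9 screen of the same height:

```python
from fractions import Fraction

def extra_width_pct(ratio, baseline=Fraction(16, 9)):
    """Percentage of extra width versus the baseline, at equal height."""
    return float(ratio / baseline - 1) * 100

print(f"21:9 is {extra_width_pct(Fraction(21, 9)):.0f}% wider than 16:9")   # 31%
print(f"32:9 is {extra_width_pct(Fraction(32, 9)):.0f}% wider than 16:9")   # 100%
```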



With HDR increasing in popularity, you’re probably wondering how it compares to the original SDR standard. HDR technology is now so widespread that popular streaming services such as Amazon Prime, Disney+, and Netflix have started to support HDR content. In fact, if you were to look for a new TV or monitor today, you’d be surprised how almost every product boasts HDR on its spec list.

This raises the question: what exactly is HDR? How does HDR work, and how does it compare to regular SDR?

The Standard Dynamic Range (SDR) is a video standard that has been in use since CRT monitors. Despite the market success of HDR screen technology, SDR is still the default format used in TVs, monitors, and projectors. Although it was used in old CRT monitors (and is actually hampered by the limitations of CRT technology), SDR is still an acceptable format today. In fact, the vast majority of video content, whether games, movies, or YouTube videos, still uses SDR. Basically, if the device or content is not rated as HDR, you’re probably using SDR.

High Dynamic Range (HDR) is the newer standard in images and videos. HDR first became popular among photographers wanting to properly expose a composition with two subjects having a 13-stop difference in exposure value. Such a wide dynamic range would allow proper exposure to real-life scenes that previously weren’t possible with SDR.
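Since each photographic stop represents a doubling of light, a 13-stop range corresponds to a 2^13 contrast ratio. A quick sketch of that arithmetic:

```python
def contrast_ratio(stops):
    """Each stop doubles the light, so n stops span a 2**n : 1 range."""
    return 2 ** stops

print(f"13 stops = {contrast_ratio(13)}:1")  # 8192:1
```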


Brightness: Nits

A nit is a unit of measurement that refers to the amount of light emitted from a surface or object. More precisely, a nit equals one candela per square meter; the candela, in turn, is the unit of luminous intensity. In other words, nits describe how bright a screen appears to the human eye. The word “nit” comes from the Latin word “nitere”, which means “to shine.”
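Because a nit is one candela per square meter, a uniformly lit screen emits roughly its luminance times its area in candela toward the viewer. A minimal sketch (the 27-inch panel dimensions below are assumed, illustrative figures):

```python
def screen_intensity_cd(nits, width_m, height_m):
    """Luminous intensity toward the viewer: luminance (cd/m^2) * area (m^2)."""
    return nits * width_m * height_m

# A 27-inch 16:9 panel is roughly 0.60 m x 0.34 m (assumed figures).
print(f"{screen_intensity_cd(300, 0.60, 0.34):.1f} cd")  # 61.2 cd
```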

Nits are important for a variety of reasons. For one, they help us understand the true brightness of a display: spec sheets can be misleading, and nits measure the amount of light that is actually reaching our eyes.

Another reason nits are important is that they can help us to compare the brightness of different displays. When two displays have the same nit value, we know that they will appear equally bright to our eyes. This is helpful when we are choosing between different display options, or when we want to make sure that a display is set to its optimal brightness.

Finally, nits can help us to understand the relationship between brightness and power consumption. Displays that are brighter require more power to operate, so nits can help us to compare the efficiency of different displays.

In short, nits are an important tool for understanding and comparing the brightness of displays. This way, we can ensure that our displays are set to the correct brightness and that we are making efficient choices when it comes to power consumption.

Most monitors, TVs, and phones on the market today range from 250 to 600 nits. Some high-end models can go up to 1,000 nits or more.
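One way to compare those figures is in photographic stops, where each stop is a doubling of brightness. A minimal sketch:

```python
from math import log2

def brightness_stops(nits_low, nits_high):
    """Difference between two peak-brightness levels in stops (doublings)."""
    return log2(nits_high / nits_low)

print(f"{brightness_stops(250, 1000):.1f} stops")  # 2.0 stops
```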

But what is the ideal brightness for a screen? That really depends on your needs and preferences. If you’re using your device in a bright room, you’ll want a higher nit count so the screen is still visible. If you’re in a dark room, a lower nit count may be just fine.

Some people prefer a brighter screen because it gives colors more pop and makes text easier to read. Others find that too much brightness can be harsh on the eyes.

Ultimately, it’s up to you to decide what nit level is ideal for you; experimentation is key. When shopping, make sure to test different options and see what works best for you. Take your time, look at the available options, and don’t rush into a decision so that you don’t end up with something you’re not happy with.


Final Thoughts

I start my final thoughts with this quote: “Knowledge is a vast ocean. The more, the merrier.” Many IT experts do not know some basic concepts of computer hardware, yet it is important to have at least a little knowledge of the different areas of computer science, such as computer hardware, networks, and the internet.

Anyway, that’s my own personal opinion.



Raja Muhammad Mustansar Javaid

Writer | network engineer | Traveler | Biker | Polyglot. I’m so deep even the ocean gets jealous