Why do we think of desktops and laptops as computers when smartphones are also general-purpose computers?


When we think of a computer, we usually picture a desktop or laptop, but smartphones are general-purpose computers too. A computer is structured in layers of hardware, operating system, and applications, and multitasking is handled by sharing the CPU's time among programs. Linux's CFS scheduler distributes the central processing unit fairly according to each program's priority and manages the execution of programs efficiently.

 

When we say “computer”, the first thing that comes to mind is probably a desktop computer or laptop. These are categorized as general-purpose computers. A general-purpose computer is exactly what it sounds like: a computer that can be used for a wide variety of purposes. Surprisingly, many people don’t think of their smartphone as a computer, but it is definitely a general-purpose computer too. We use our smartphones to send emails, shop, chat, watch videos, listen to music, create documents, and do countless other things. The reason for bringing up general-purpose computers is to clarify the scope and terminology of what follows. Since general-purpose computers are what we usually mean when we say “computer,” we’ll refer to them simply as computers from here on.
If we look at the structure of a computer as a hierarchy, the bottom layer is the hardware. Every hardware resource a program could need lives at this layer, but nothing interacts with it directly, because using the hardware directly would be far too complicated and cumbersome. This is where the operating system comes in: it sits on top of the hardware and decides when, where, and how much of each hardware resource is allocated. You’ve probably heard the names of operating systems such as Windows and Android. But it is still complicated and cumbersome for an ordinary user to do what they want directly through the operating system, so applications sit on top of the operating system. Users interact with these applications to get their work done.
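To make this layering concrete, here is a tiny sketch in Python (the file name and message are made up for illustration): the application asks the operating system to save some data, and the operating system is what actually drives the disk hardware.

```python
import os

# A toy illustration of the layering described above: the application (this
# script) never touches the disk directly. It asks the operating system for
# help through system calls, and the operating system drives the hardware.
fd = os.open("example.txt", os.O_WRONLY | os.O_CREAT, 0o644)  # ask the OS to prepare a file
os.write(fd, b"hello from the application layer\n")           # the OS turns this into disk operations
os.close(fd)                                                   # the OS releases the resource
```

On Linux, running a script like this under a tool such as strace shows the open, write, and close system calls the application hands to the operating system.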
With the exception of Microsoft’s DOS, virtually all operating systems support multitasking. Multitasking is the simultaneous execution of two or more programs on a single computer. It’s worth being precise about what “running” a program means. In the narrow sense, a program is running when it has actually been allocated hardware resources, such as the central processing unit (CPU), and is performing computation. In the broader sense, a program is also considered running while it waits in line, ready to be allocated the CPU. The narrow sense is easy to understand, but the broader sense can feel a bit foreign. It emerged from the following background.
All multitasking is implemented by time-sharing, that is, by slicing up time. A single central processing unit cannot be allocated to multiple programs at once, so time is divided into small segments, with some segments given to one program and other segments to others. If the slices are fine enough, the user perceives the programs as running simultaneously, as if there were multiple central processing units. From now on, we’ll call the narrower form of execution “execution in the strict sense” and the broader form “time-shared execution.”
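As a toy illustration of time-sharing (not how any real scheduler is implemented), here is a Python sketch in which one CPU is handed around between a few made-up programs, one short slice at a time:

```python
from collections import deque

# A toy simulation of time-sharing: one CPU, three programs, each handed a
# short slice of time in turn. The program names and numbers are made up for
# illustration; a real scheduler is far more sophisticated than this.
programs = deque([("browser", 30), ("music player", 20), ("chat app", 10)])  # (name, ms of work left)
TIME_SLICE = 5  # milliseconds a program may keep the CPU before it is swapped out

while programs:
    name, remaining = programs.popleft()   # hand the CPU to the next program in line
    ran = min(TIME_SLICE, remaining)       # it runs for at most one slice
    remaining -= ran
    print(f"{name} ran for {ran} ms, {remaining} ms of work left")
    if remaining > 0:
        programs.append((name, remaining)) # not finished: back to the end of the line
```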
Multitasking doesn’t just mean watching a show on your computer while surfing the web. Any number of programs can be running in a time-shared manner without you even realizing it. In fact, it’s impossible for a user to keep track of all the programs running in a time-shared manner on their computer: even when you think you’re running a single program, dozens to hundreds of others are already running alongside it in a time-shared fashion. This is where a problem arises. The number of central processing units is limited, yet dozens or even hundreds of programs may need one right now. This is where the operating system steps in to allocate resources appropriately: it decides when, how much, and to which programs the CPUs are allocated, and this is called scheduling.
How do operating systems schedule so many programs? This varies from operating system to operating system, but we’ll use Linux as an example. Unlike Windows, Linux is freely distributed and all of its code is publicly available.
Linux uses the Completely Fair Scheduler (CFS) to schedule programs. The name promises a great deal, but the way it works is simple. CFS divides programs into 40 classes according to priority, and each class carries a weight that reflects that priority. CFS also records the total execution time of each program: every time the central processing unit is assigned to a program, CFS adds the time that program spent on the CPU to its running total. Periodically, it swaps out the running program and assigns the CPU to a different one, and by always favoring the programs with the lowest total execution time, CFS ensures that every program gets its chance to use the CPU.
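As a rough sketch of those 40 classes, the snippet below assumes the Linux convention of “nice” values from -20 (highest priority) to +19 (lowest) and approximates the weights with a simple formula (nice 0 maps to a weight of 1024, and each step changes the weight by roughly 25%). The real kernel uses a precomputed table rather than this formula, so treat the numbers as illustrative.

```python
# A rough sketch of the 40 priority classes mentioned above. On Linux, "nice"
# values run from -20 (highest priority) to +19 (lowest), and each level gets
# a weight. The kernel keeps these weights in a precomputed table; the formula
# below only approximates it, which is enough for illustration.
def weight_for_nice(nice: int) -> int:
    return round(1024 / (1.25 ** nice))

for nice in (-20, -10, 0, 10, 19):
    print(f"nice {nice:+d} -> weight {weight_for_nice(nice)}")
```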
When CFS updates a program’s total execution time, it scales the elapsed time by the weight derived from the program’s priority: higher-priority programs accumulate this adjusted time more slowly, and lower-priority programs accumulate it more quickly. The adjusted total is called virtual execution time (virtual runtime), and because the scheduler always favors the smallest virtual runtime, programs end up receiving CPU time fairly, in proportion to their priority.
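A minimal sketch of this bookkeeping, with made-up program names and weights (an illustration of the idea, not the kernel’s code), might look like this:

```python
# Each program accumulates virtual runtime scaled by its weight, and the
# scheduler always picks the program with the smallest virtual runtime next.
NICE_0_WEIGHT = 1024  # reference weight of a normal-priority program

class Program:
    def __init__(self, name: str, weight: int):
        self.name = name
        self.weight = weight
        self.vruntime = 0.0

    def account(self, ran_ms: float) -> None:
        # Heavier (higher-priority) programs accumulate virtual runtime more
        # slowly, so they come back to the front of the line sooner.
        self.vruntime += ran_ms * NICE_0_WEIGHT / self.weight

programs = [Program("editor", weight=1024), Program("backup", weight=110)]  # illustrative weights

for _ in range(6):
    current = min(programs, key=lambda p: p.vruntime)  # smallest virtual runtime runs next
    current.account(ran_ms=5)                          # it keeps the CPU for one slice
    print(f"{current.name:6} vruntime={current.vruntime:7.1f}")
```

In this toy run, the heavily weighted “editor” is chosen far more often than the lightly weighted “backup”, which is exactly the effect the priority weights are meant to produce.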
But can CFS hold up when the number of programs running in a time-shared manner reaches thousands or tens of thousands? Fortunately, it performs well even then. CFS manages programs with a data structure called a red-black tree. Every program waiting for the central processing unit is kept in the tree, ordered by virtual runtime; periodically the scheduler takes the program at the smallest end of the tree out, runs it, and puts it back in when its turn ends. Thanks to the properties of the red-black tree, inserting a program and removing the next one to run take time proportional to the logarithm of the number of programs: even if the number of programs running in a time-shared manner grows exponentially, the time these operations take grows only linearly.
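Python has no built-in red-black tree, so the sketch below stands in a binary heap for it; like a red-black tree, a heap can insert an item and remove the smallest one in time proportional to the logarithm of the number of items, which is the property the argument above depends on. The virtual runtime values are made up.

```python
import heapq

# A stand-in for the red-black tree: a heap also supports O(log n) insertion
# and O(log n) removal of the smallest element.
ready = []  # (virtual runtime, name), smallest virtual runtime first
for name, vruntime in [("browser", 12.0), ("editor", 3.5), ("backup", 40.0)]:
    heapq.heappush(ready, (vruntime, name))    # insert a waiting program: O(log n)

vruntime, name = heapq.heappop(ready)          # remove the "leftmost" program: O(log n)
print(f"run {name} next (virtual runtime {vruntime})")
heapq.heappush(ready, (vruntime + 5.0, name))  # put it back after its time slice
```

Since the base-2 logarithm of a million is only about 20, even a huge jump in the number of waiting programs adds just a handful of extra steps to each scheduling decision.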
Of course, the scheduler doesn’t solve every problem. For a program to run, its data must be loaded into main memory (RAM), so the capacity of main memory can limit how many programs can run, and the performance of the central processing unit itself is another limiting factor. These issues, however, lie outside the scheduler’s responsibility; within its own domain of fairness and performance, CFS does its job very well.

 
