Types of YARN Schedulers

Explore the world of YARN schedulers as we delve into their unique functionalities and how they manage resources in big data processing.

Are you struggling to keep a shared Hadoop cluster running smoothly? Do you find it difficult to track which applications are consuming resources and how much capacity is left? If so, it's time to get familiar with YARN schedulers. Apache Hadoop YARN (Yet Another Resource Negotiator) ships with several pluggable schedulers that decide how cluster resources are shared among applications, and choosing the right one is essential for any administrator who wants to keep a multi-tenant cluster both efficient and fair. In this article, we discuss the different types of YARN schedulers and their key configuration features, so you can find the one that best suits your needs.

Let's dive into the world of YARN scheduling!

FIFO Scheduler

The First-In-First-Out (FIFO) scheduler is the simplest type of YARN scheduler. It processes jobs in the order they are received, with no regard for their priority or resource requirements.

This means that if a large job comes in before a smaller one, it will be processed first, regardless of how long it takes to complete.

While this may seem like an inefficient way to manage resources, FIFO scheduling can be useful in situations where simple, predictable ordering matters more than efficiency. For example, when multiple users submit jobs at the same time, each job is processed strictly in arrival order, with no bias toward larger or more complex tasks.

FIFO scheduling is best suited for small clusters with low utilization rates where there isn’t much competition for resources.
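Switching the ResourceManager to the FIFO scheduler is a one-property change in yarn-site.xml. The class name below is the stock Apache Hadoop FifoScheduler; adjust it if your distribution packages the scheduler differently.

```xml
<!-- yarn-site.xml: use the FIFO scheduler -->
<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fifo.FifoScheduler</value>
</property>
```

The same property is used to select any of the schedulers discussed below, so trying a different scheduler on a test cluster is a matter of editing one value and restarting the ResourceManager.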

Capacity Scheduler

The Capacity Scheduler works by dividing the cluster's total capacity into several queues, each with its own resource limits and priority levels. This ensures that high-priority jobs get access to the necessary resources first, while lower-priority jobs are queued until sufficient resources become available.

One of the key benefits of Capacity Scheduler is its ability to support multiple tenants or applications running simultaneously on a shared cluster. Each tenant can have their own queue with specific resource allocations and priorities, ensuring fair distribution across all users.

Another advantage is its hierarchical design, which enables administrators to manage large clusters more efficiently by grouping queues under parent queues for better organization and control over resource allocation.
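As a minimal sketch, the Capacity Scheduler's queues are defined in capacity-scheduler.xml. The queue names (prod, dev) and percentages below are purely illustrative:

```xml
<!-- capacity-scheduler.xml: two top-level queues under root -->
<property>
  <name>yarn.scheduler.capacity.root.queues</name>
  <value>prod,dev</value>
</property>
<property>
  <!-- prod is guaranteed 70% of cluster resources -->
  <name>yarn.scheduler.capacity.root.prod.capacity</name>
  <value>70</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.dev.capacity</name>
  <value>30</value>
</property>
<property>
  <!-- dev may borrow idle capacity, but only up to this ceiling -->
  <name>yarn.scheduler.capacity.root.dev.maximum-capacity</name>
  <value>50</value>
</property>
```

The capacity values under a parent must sum to 100; maximum-capacity controls how far a queue can grow into idle capacity elsewhere in the cluster.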

Fair Scheduler

The Fair Scheduler works by dividing resources equally among all running applications, regardless of their priority or size. Over time, every user gets an equal share of the available resources, which helps prevent any one user from monopolizing the cluster.

One advantage of using the Fair Scheduler is that it can help to reduce wait times for smaller jobs. Since larger jobs are not given priority over smaller ones, they cannot hold up other users’ work indefinitely.

Another benefit is its simplicity and ease-of-use; it requires minimal configuration and can be set up quickly without much technical expertise.
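The Fair Scheduler is selected via yarn.resourcemanager.scheduler.class (org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler) and optionally tuned with an allocation file pointed to by yarn.scheduler.fair.allocation.file. A sketch of such a file, with hypothetical queue names and sizes:

```xml
<!-- fair-scheduler.xml (allocation file): weighted queues -->
<allocations>
  <queue name="analytics">
    <!-- receives twice the share of an unweighted queue -->
    <weight>2.0</weight>
    <!-- guaranteed floor, even under contention -->
    <minResources>4096 mb,4 vcores</minResources>
  </queue>
  <queue name="adhoc">
    <weight>1.0</weight>
  </queue>
</allocations>
```

With no allocation file at all, the Fair Scheduler simply shares the cluster evenly, which is why it can be set up quickly without much configuration.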

However, there are limitations to consider: with purely equal sharing, a critical job cannot be guaranteed extra resources unless you configure weights or minimum shares for its queue.

Capacity and Hierarchical Design

The Capacity Scheduler pairs capacity guarantees with a hierarchical queue design. Each user or application is assigned a certain amount of resources, which it can use as needed. The hierarchical design comes into play when there are multiple levels or tiers within an organization, with different users having varying levels of access to resources.

For example, in a large company with departments such as design, production, and sales, this design allows each department its own guaranteed share of cluster resources based on its needs. Within each department, child queues can give individual teams specific shares depending on their roles.

This type of scheduling ensures fair distribution while also allowing flexibility in resource allocation across departments. It's important to note that this approach requires careful planning and management so that all users receive adequate access without overloading any one area at the expense of others.
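The department example above maps directly onto nested queues in capacity-scheduler.xml. The names and numbers here are illustrative; note that a child queue's capacity is a percentage of its parent, not of the whole cluster:

```xml
<!-- capacity-scheduler.xml: a two-level queue hierarchy -->
<property>
  <name>yarn.scheduler.capacity.root.queues</name>
  <value>design,production,sales</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.production.capacity</name>
  <value>50</value>
</property>
<property>
  <!-- production is itself split into two team queues -->
  <name>yarn.scheduler.capacity.root.production.queues</name>
  <value>team1,team2</value>
</property>
<property>
  <!-- 60% of production's 50% = 30% of the whole cluster -->
  <name>yarn.scheduler.capacity.root.production.team1.capacity</name>
  <value>60</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.production.team2.capacity</name>
  <value>40</value>
</property>
```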

Minimum User Percentage and User Limit Factor

The minimum user percentage is a setting that ensures each individual user or group of users has access to a certain amount of resources, even during peak usage times. This helps prevent any one person or group from monopolizing all available resources.

The second setting is the user limit factor, which caps how much of a queue a single user can consume, expressed as a multiple of the queue's configured capacity. This ensures fairness in resource allocation across users and prevents overuse by any one of them.

By using these settings together with other scheduling policies such as queue ordering policies and priority levels, yarn schedulers can effectively manage large amounts of data processing while ensuring fair distribution among all users involved in the process.
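In the Capacity Scheduler, both settings are per-queue properties in capacity-scheduler.xml. The queue name "analytics" and the values below are illustrative:

```xml
<!-- capacity-scheduler.xml: per-user limits on one queue -->
<property>
  <!-- each active user is guaranteed at least 25% of the queue -->
  <name>yarn.scheduler.capacity.root.analytics.minimum-user-limit-percent</name>
  <value>25</value>
</property>
<property>
  <!-- a single user may consume up to 2x the queue's configured capacity -->
  <name>yarn.scheduler.capacity.root.analytics.user-limit-factor</name>
  <value>2</value>
</property>
```

With minimum-user-limit-percent set to 25, at most four users share the queue evenly; a fifth active user waits until a share frees up.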


Archetypes

Some schedulers let you define archetypes, templates that describe the resource profile of a class of applications. With archetypes, you can specify the resources required by each application type and ensure that they get allocated appropriately. This is particularly useful in large-scale environments where many different types of applications run simultaneously.

For example, if you have a mix of long-running batch jobs and short-lived interactive sessions, you might want to create two separate archetypes with different resource requirements for each type. By doing so, the scheduler can allocate resources more efficiently based on the specific needs of each archetype.

In addition to defining resource requirements for individual archetypes, yarn schedulers also allow users to set up rules governing how these resources are allocated across multiple queues or clusters. This level of control ensures that your system runs smoothly even under heavy load conditions.

Container Churn

Container churn is the frequent creation and tear-down of containers across the cluster. It can happen when an application finishes its work, or when it needs more resources than are currently available. Container churn can have a significant impact on system performance, since creating new containers takes additional time and resources.

To minimize the cost of container churn, consider preemption policies that improve resource utilization by preempting lower-priority applications in favor of higher-priority ones. You may also want to adjust your queue ordering policies so that high-priority jobs are scheduled first.

CPU Scheduling (Dominant Resource Fairness)

Dominant Resource Fairness (DRF) is a popular algorithm used for CPU scheduling in Hadoop YARN. DRF ensures that each application gets its fair share of resources based on its dominant resource needs, which can be either memory or CPU.

The DRF algorithm works by calculating the dominant share and then allocating resources accordingly to each application. This means that applications with higher demands for a particular resource will get more allocation than those with lower demands.
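As a concrete example of a dominant share: an application using 3 GB of a 30 GB cluster (10% of memory) and 2 of 10 cores (20% of CPU) has a dominant share of 20%, determined by CPU. In the Capacity Scheduler, DRF is enabled by swapping in the dominant-resource calculator; by default only memory is compared:

```xml
<!-- capacity-scheduler.xml: compare apps on both memory and CPU (DRF) -->
<property>
  <name>yarn.scheduler.capacity.resource-calculator</name>
  <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
</property>
```

In the Fair Scheduler, the equivalent is setting a queue's schedulingPolicy to drf in the allocation file.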


Preemption

Some YARN schedulers support preemption. This means that if a high-priority job enters the queue, it can preempt lower-priority jobs and take over their resources. Preemption ensures that critical workloads are completed on time, even in situations where there is resource contention.

For example, let’s say you have two jobs running on your cluster: Job A with low priority and Job B with high priority. If both jobs require more resources than are currently available, then YARN will preempt some of Job A’s allocated resources (CPU or memory) so they can be used by Job B instead.

Preemption helps ensure fairness in resource allocation while also ensuring important workloads get done first. It prevents long-running or low-priority tasks from hogging valuable cluster resources when other more important tasks need them urgently.
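For the Capacity Scheduler, preemption is switched on via the scheduler monitor in yarn-site.xml:

```xml
<!-- yarn-site.xml: enable Capacity Scheduler preemption -->
<property>
  <name>yarn.resourcemanager.scheduler.monitor.enable</name>
  <value>true</value>
</property>
<property>
  <name>yarn.resourcemanager.scheduler.monitor.policies</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy</value>
</property>
```

The Fair Scheduler has its own switch, yarn.scheduler.fair.preemption, which reclaims resources for queues that have been below their fair or minimum share for too long.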

Queue Ordering Policies

Queue ordering policies determine the order in which applications are allocated resources from a queue. They can be configured to prioritize certain types of applications or users over others, ensuring that critical workloads receive the necessary resources first.

Several queue ordering policies are available, each with its own benefits and drawbacks. FIFO (First In, First Out) is a simple policy that allocates resources in the order requests were received.

A fair ordering policy instead shares a queue's resources evenly among the applications currently running in it, so small applications are not stuck behind large ones.

These per-queue policies work alongside scheduler-wide mechanisms such as Dominant Resource Fairness (DRF) and the user limits discussed above (minimum user percentage, user limit factor). Each combination has its strengths depending on your use case.

Understanding Queue Ordering Policies is crucial for effective yarn scheduling as it ensures efficient resource allocation while prioritizing critical workloads at all times.
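In the Capacity Scheduler, the ordering policy is set per leaf queue; the queue name "default" below is the stock out-of-the-box queue:

```xml
<!-- capacity-scheduler.xml: fair ordering inside one leaf queue -->
<property>
  <name>yarn.scheduler.capacity.root.default.ordering-policy</name>
  <!-- "fifo" is the default; "fair" shares the queue among running apps -->
  <value>fair</value>
</property>
```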

Username and Application Driven Calculations

With username-driven calculations, the scheduler can allocate resources based on user-defined criteria such as priority or job size. This feature is particularly useful in multi-user environments where different users have varying needs for computing resources.

On the other hand, application-driven calculations allow for more granular control over resource allocation by taking into account specific requirements of each application running on the cluster. For example, some applications may require more memory than others or may be CPU-intensive while others are I/O bound.

By using both username and application driven calculations together, yarn schedulers can ensure that computing resources are allocated efficiently across all users and applications without any one user or app monopolizing them. This helps to prevent bottlenecks in processing power which could slow down overall performance.

Understanding how username and application driven calculations work within a yarn scheduler is crucial for optimizing your big data processing workflow.

Default Queue Mapping

YARN schedulers support a default queue mapping: if no specific queue is specified at submission time, the application is automatically assigned to the default queue. The benefit of this feature is that it simplifies resource management by ensuring all applications land in a queue, even if they don't explicitly request one.

Default Queue Mapping can also help with load balancing and prioritization. By assigning different queues with varying levels of resources and priorities, administrators can ensure fair allocation of resources across multiple applications.

Default queue mapping provides an efficient way for users to run their YARN applications without having to worry about manually assigning them to queues every time they submit one.
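The Capacity Scheduler can go further and map submissions to queues by user or group. The user name "alice", the group "devs", and the queue names below are hypothetical; mappings are evaluated left to right and the first match wins:

```xml
<!-- capacity-scheduler.xml: route submissions to queues automatically -->
<property>
  <name>yarn.scheduler.capacity.queue-mappings</name>
  <!-- user alice -> analytics; group devs -> dev; everyone else -> default -->
  <value>u:alice:analytics,g:devs:dev,u:%user:default</value>
</property>
```

The %user placeholder expands to the submitting user's name, which also allows patterns like u:%user:%user for per-user queues.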


Application Priority

Some YARN schedulers support application-level priorities. The priority level assigned to an application can be based on factors such as the importance of the task or its deadline. In some cases, users may also have different priorities assigned to their applications based on their role in the organization.

The Fair Scheduler and Capacity Scheduler both support priority-based scheduling. With Fair Scheduler, each user’s jobs are given equal weightage by default but can be adjusted using weights or minimum share settings.

On the other hand, Capacity Scheduler allows administrators to define queues with different priorities and allocate resources accordingly.

Understanding how priority works in yarn schedulers is essential for efficient resource management and meeting project deadlines.
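With the Capacity Scheduler, priorities are bounded cluster-wide and can be defaulted per queue; the values and the queue name "prod" below are illustrative:

```xml
<!-- application priority settings -->
<property>
  <!-- yarn-site.xml: highest priority any application may request -->
  <name>yarn.cluster.max-application-priority</name>
  <value>10</value>
</property>
<property>
  <!-- capacity-scheduler.xml: priority for apps that don't specify one -->
  <name>yarn.scheduler.capacity.root.prod.default-application-priority</name>
  <value>5</value>
</property>
```

A running application's priority can also be changed from the command line with yarn application -appId <application-id> -updatePriority <priority>.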


Labels

YARN schedulers also support labels for tagging resources and applications. This can be useful for organizing your resources, tracking usage patterns, and enforcing policies. For example, you could use labels to tag certain applications as high-priority or low-priority, or group them by department or project.

In addition to assigning labels manually through the YARN CLI (command-line interface), some schedulers also support automatic labeling based on application properties such as user name or queue name. This can help simplify management tasks by reducing the need for manual intervention.

Labels provide a flexible way of managing resources in large-scale data processing environments like Hadoop clusters.
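A common use of labels is node labels, which steer queues onto particular machines. Assuming a label "gpu" has been registered (e.g. with yarn rmadmin -addToClusterNodeLabels "gpu") and assigned to nodes, a queue is granted access like this; the queue name "prod" is illustrative:

```xml
<!-- capacity-scheduler.xml: let the prod queue use gpu-labeled nodes -->
<property>
  <name>yarn.scheduler.capacity.root.prod.accessible-node-labels</name>
  <value>gpu</value>
</property>
<property>
  <!-- prod gets 100% of the capacity of the gpu partition -->
  <name>yarn.scheduler.capacity.root.prod.accessible-node-labels.gpu.capacity</name>
  <value>100</value>
</property>
```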

Queue Names

Queue names are used to identify and group applications that have similar resource requirements. This helps in managing resources efficiently, as it allows for better control over which applications get access to specific resources.

For instance, if you have a high-priority application that requires more CPU power than other applications, you can assign it to a separate queue with higher priority.

Understanding the different types of YARN schedulers and how they use queues is crucial for any administrator who wants to keep a shared cluster organized and its workloads under control.

Limiting Applications Per Queue

This feature is particularly useful for organizations with limited resources, as it ensures that no single application monopolizes all available resources. By limiting the number of applications per queue, yarn schedulers can ensure fair resource allocation and prevent any one user or group from dominating the system.

For example, if an organization has three queues – A, B and C – they may choose to limit each queue to only two running applications at any given time. This means that even if there are multiple users submitting jobs simultaneously across different queues, no more than six total jobs will be allowed to run concurrently.

By implementing this type of limit in the scheduler configuration file (capacity-scheduler.xml for the Capacity Scheduler), administrators can effectively manage resource usage across their entire cluster while ensuring fairness among all users.
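A sketch of such limits, assuming the Capacity Scheduler; the queue name "dev" and the numbers are illustrative:

```xml
<!-- capacity-scheduler.xml: cap pending + running applications -->
<property>
  <!-- cluster-wide ceiling across all queues -->
  <name>yarn.scheduler.capacity.maximum-applications</name>
  <value>10000</value>
</property>
<property>
  <!-- ceiling for the dev queue alone -->
  <name>yarn.scheduler.capacity.root.dev.maximum-applications</name>
  <value>50</value>
</property>
```

Note that maximum-applications bounds pending plus running applications; the number actually running concurrently is usually governed by the queue's capacity and by yarn.scheduler.capacity.maximum-am-resource-percent, which limits how much of a queue ApplicationMasters may consume.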

Limiting applications per queue is just one way in which YARN schedulers help organizations optimize their big data processing workflows by managing resources efficiently and fairly.

Container Sizing

Containers are the basic unit of resource allocation in YARN, and their size can have a significant impact on performance. The size of containers should be chosen based on the specific needs of your application and cluster environment.

If you choose containers that are too small, you may end up with excessive overhead due to frequent container creation and destruction. On the other hand, if your containers are too large, they may not fit well into available resources or cause unnecessary delays in processing.

Determining an appropriate container size for your workload requires careful consideration, as it depends on factors such as the memory each task needs (e.g. JVM heap size), the CPU each task uses (number of cores), and the network bandwidth each task requires.
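The scheduler's bounds on container sizes live in yarn-site.xml; the values below are a common starting point, not a recommendation:

```xml
<!-- yarn-site.xml: bounds on the containers the scheduler will grant -->
<property>
  <!-- requests are rounded up to multiples of this minimum -->
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>1024</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>8192</value>
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-vcores</name>
  <value>1</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-vcores</name>
  <value>4</value>
</property>
```

Because requests are rounded up to the minimum allocation, setting the minimum too high silently inflates every container, while setting the maximum too low causes oversized requests to be rejected.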

Choosing an optimal container sizing strategy is crucial for efficient resource management, whichever YARN scheduler you use.


FAQ

What is the default scheduler in YARN?

The default scheduler in YARN is the Capacity Scheduler, although the Fair Scheduler is the default in some Hadoop distributions such as CDH.

What is the difference between YARN fair scheduler and capacity scheduler?

The difference is that the Fair Scheduler allocates an equal share of resources to all running jobs and shares spare capacity between queues, while the Capacity Scheduler guarantees each queue a configured share of cluster capacity, as defined by the organization.

What is fair scheduler in YARN?

Fair Scheduler in YARN is a technique that allocates resources to applications ensuring an equal distribution of resources over time, primarily considering memory by default.

How does the YARN capacity scheduler allocate resources among different queues?

The YARN capacity scheduler allocates resources among different queues by considering each queue’s capacity, ensuring that they get their guaranteed minimum share and utilizing free resources to satisfy demand.

What are the key features and benefits of the YARN fair scheduler?

Key features and benefits of YARN fair scheduler include: fair sharing of cluster resources, multi-tenancy support, hierarchical queues, preemption, and support for priorities.

How can one switch between different YARN schedulers and configure their settings?

To switch between different YARN schedulers and configure their settings, modify the ‘yarn-site.xml’ file and adjust the ‘yarn.resourcemanager.scheduler.class’ property accordingly.
