How To: Scale .NET Applications

Overview
Scalability refers to the ability of an application to continue to meet its performance objectives with increased load. Typical performance objectives include application response time and throughput. When measuring performance, it is important to consider the cost at which performance objectives are achieved. For example, achieving a sub-second response time objective with prolonged 100% CPU utilization would generally not be an acceptable solution.
This How To is intended to help you make informed design choices and tradeoffs that in turn will help you to scale your application. An exhaustive treatment of hardware choices and features is outside the scope of this document.
After completing this How To, you will be able to:

  • Determine when to scale up versus when to scale out.
  • Quickly identify resource limitation and performance bottlenecks.
  • Identify common scaling techniques.
  • Identify scaling techniques specific to .NET technologies.
  • Adopt a step-by-step process to scale .NET applications.
Scale Up vs. Scale Out

    There are two main approaches to scaling:
    Scaling up. With this approach, you upgrade your existing hardware. You might replace existing hardware components, such as a CPU, with faster ones, or you might add new hardware components, such as additional memory. The key hardware components that affect performance and scalability are CPU, memory, disk, and network adapters. An upgrade could also entail replacing existing servers with new servers.
    Scaling out. With this approach, you add more servers to your system to spread application processing load across multiple computers. Doing so increases the overall processing capacity of the system.

    Pros and Cons
    Scaling up is a simple option that can be cost-effective. It does not introduce additional maintenance and support costs. However, any single points of failure remain, which is a risk. Beyond a certain threshold, adding more hardware to the existing servers may not produce the desired results. For an application to scale up effectively, the underlying framework, runtime, and computer architecture must also scale up.

    Scaling out enables you to add more servers in anticipation of further growth, and provides the flexibility to take a server participating in the Web farm offline for upgrades with relatively little impact on the cluster. In general, the ability of an application to scale out depends more on its architecture than on the underlying infrastructure.
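    To make the architecture point concrete, the hypothetical C# sketch below (not part of the original article; ICartStore and the class names are invented for illustration) contrasts per-user state held in one server's memory, which ties each user to a particular server, with state kept behind an abstraction over a shared out-of-process store, which lets any server in the farm handle the next request:

        using System.Collections.Concurrent;

        // Farm-unfriendly: the cart lives in this server's memory, so the load
        // balancer must pin every request from a given user to the same server,
        // and the state is lost if that server is taken offline.
        public static class LocalCarts
        {
            public static readonly ConcurrentDictionary<string, string> Items = new();
        }

        // Farm-friendly: the handler depends only on an abstraction over a shared,
        // out-of-process store that every server can reach, so any server can
        // service the next request. ICartStore is illustrative, not a .NET API.
        public interface ICartStore
        {
            void Save(string userId, string cartJson);
            string Load(string userId);
        }

        public class CartHandler
        {
            private readonly ICartStore _store;
            public CartHandler(ICartStore store) => _store = store;

            public void AddItem(string userId, string itemJson)
            {
                // State round-trips through the shared store rather than staying
                // in this process, so additional servers genuinely add capacity.
                var cart = _store.Load(userId) ?? "[]";
                _store.Save(userId, cart == "[]"
                    ? "[" + itemJson + "]"
                    : cart.Insert(cart.Length - 1, "," + itemJson));
            }
        }

    Whether the shared store is a database, a state server, or a distributed cache is a deployment detail; the point is that the request handler itself holds no server-local state.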

    When to Scale Up vs. Scale Out
    Should you upgrade existing hardware or consider adding additional servers? To help you determine the correct approach, consider the following:

  • Scaling up is best suited to improving the performance of tasks that are capable of parallel execution. Scaling out works best for handling an increase in workload or demand.
  • For server applications to handle increases in demand, it is best to scale out, provided that the application design and infrastructure supports it.
  • If your application contains tasks that can be performed simultaneously and independently of one another, and the application runs on a single-processor server, execute those tasks asynchronously (see the sketch after this list). Asynchronous processing benefits I/O-bound tasks the most; it helps far less when the tasks are CPU bound and restricted to a single processor, because CPU-bound work spread across multiple threads on one CPU runs relatively slowly due to thread-switching overhead. In that case, you can improve performance by adding another CPU to enable true parallel execution of the tasks.
  • Limitations imposed by the operating system and server hardware mean that you face a diminishing return on investment when scaling up. For example, operating systems have a limitation on the number of CPUs they support, servers have memory limits, and adding more memory has less effect when you pass a certain level (for example, 4 GB).
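    The following minimal C# sketch (not part of the original article; the URLs are placeholders) shows two independent, I/O-bound calls started concurrently and then awaited together, so the elapsed time approaches that of the slower call rather than the sum of both:

        using System;
        using System.Diagnostics;
        using System.Net.Http;
        using System.Threading.Tasks;

        class AsyncIoDemo
        {
            static async Task Main()
            {
                using var http = new HttpClient();
                var sw = Stopwatch.StartNew();

                // Start both I/O-bound requests without waiting for either;
                // the thread is free while the I/O is in flight, so this
                // helps even on a single-processor server.
                Task<string> orders  = http.GetStringAsync("https://example.com/orders");
                Task<string> catalog = http.GetStringAsync("https://example.com/catalog");

                // Resume only after both requests have completed.
                string[] results = await Task.WhenAll(orders, catalog);

                Console.WriteLine($"Fetched {results[0].Length + results[1].Length} chars in {sw.ElapsedMilliseconds} ms.");
            }
        }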
Load Balancing
    There are many approaches to load balancing. This section contains a discussion of the most commonly used techniques.

    Web Farm
    In a Web farm, incoming requests are distributed across multiple load-balanced servers to provide high availability of service. Network Load Balancing, which provides this capability, is currently available only in Windows® 2000 Advanced Server and Datacenter Server. Figure 1 illustrates this approach.

    Figure 1. Load balancing in a Web farm

    For more information, see http://msdn.microsoft.com/en-us/library/ms979199.aspx.
