In addition to the usual visibility concerns, you should also avoid unnecessary public members to prevent any additional serialization overhead when you use the XmlSerializer class, which serializes all public members by default.
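For example, a public member can be excluded from XML serialization with the XmlIgnore attribute, while non-public members are skipped automatically. This is a minimal sketch; the Order type and its member names are invented for illustration:

```csharp
using System;
using System.IO;
using System.Xml.Serialization;

public class Order
{
    public int Id;                 // serialized: public field

    [XmlIgnore]
    public string InternalNotes;   // public, but explicitly excluded from serialization

    internal string AuditTag;      // non-public members are never serialized
}

public static class Demo
{
    public static string Serialize(Order o)
    {
        var serializer = new XmlSerializer(typeof(Order));
        using (var writer = new StringWriter())
        {
            serializer.Serialize(writer, o);
            return writer.ToString();
        }
    }
}
```

Keeping members internal or marking them with XmlIgnore both remove them from the serialized payload and from the serializer's generated code.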
The CLR consists of a number of components that are responsible for managed code execution. These components are referred to throughout this chapter, so you should be aware of their purpose. Figure 5.1 shows the basic CLR architecture and components.
The way you write managed code significantly impacts the efficiency of the CLR components shown in Figure 5.1. By following the guidelines and techniques presented in this chapter, you can optimize your code, and enable the run-time components to work most efficiently. The purpose of each component is summarized below:
JIT compiler. The just-in-time (JIT) compiler converts the Microsoft intermediate language (MSIL) that is contained in an assembly into native machine code at run time. Methods that are never called are not JIT-compiled.
Garbage collector. The garbage collector is responsible for allocating, freeing, and compacting memory.
Structured exception handling. The runtime supports structured exception handling to allow you to build robust, maintainable code. Use language constructs such as try/catch/finally to take advantage of structured exception handling.
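A minimal sketch of those constructs (the FileReader class and its error handling are illustrative, not from the original text): catch lets you log or handle the failure, and finally guarantees cleanup on every code path:

```csharp
using System;
using System.IO;

public static class FileReader
{
    public static string ReadAllText(string path)
    {
        StreamReader reader = null;
        try
        {
            reader = new StreamReader(path);
            return reader.ReadToEnd();
        }
        catch (IOException ex)
        {
            Console.Error.WriteLine("Read failed: " + ex.Message);
            throw;                      // rethrow; don't swallow the error
        }
        finally
        {
            if (reader != null)
                reader.Dispose();       // runs on both success and failure
        }
    }
}
```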
Threading. The .NET Framework provides a number of threading and synchronization primitives to allow you to build high performance, multithreaded code. Your choice of threading approach and synchronization mechanism impacts application concurrency; hence, it also impacts scalability and overall performance.
Security. The .NET Framework provides code access security to ensure that code has the necessary permissions to perform specific types of operations such as accessing the file system, calling unmanaged code, accessing network resources, and accessing the registry.
Loader. The .NET Framework loader is responsible for locating and loading assemblies.
Metadata. Assemblies are self-describing. An assembly contains metadata that describes aspects of your program, such as the set of types that it exposes, and the members those types contain. Metadata facilitates JIT compilation and is also used to convey version and security-related information.
Interop. The CLR can interoperate with various kinds of unmanaged code, such as DLLs written in Microsoft Visual Basic® or Microsoft Visual C++®, and COM components. Interop allows your managed code to call these unmanaged components.
Remoting. The .NET remoting infrastructure supports calls across application domains, between processes, and over various network transports.
Debugging. The CLR exposes debugging hooks that can be used to debug or profile your assemblies.
Performance and Scalability Issues
This section is designed to give you a high-level overview of the major issues that can impact the performance and scalability of managed code. Subsequent sections in this chapter provide strategies, solutions, and technical recommendations to prevent or resolve these issues. There are several main issues that impact managed code performance and scalability:
Memory misuse. If you create too many objects, fail to properly release resources, preallocate memory, or explicitly force garbage collection, you can prevent the CLR from efficiently managing memory. This can lead to increased working set size and reduced performance.
Resource cleanup. Implementing finalizers when they are not needed, failing to suppress finalization in the Dispose method, or failing to release unmanaged resources can lead to unnecessary delays in reclaiming resources and can potentially create resource leaks.
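A common way to avoid both problems is the standard Dispose pattern, sketched below with a hypothetical NativeBuffer type: Dispose releases the unmanaged resource promptly and suppresses finalization, so the finalizer remains only as a safety net for callers that forget to dispose:

```csharp
using System;
using System.Runtime.InteropServices;

public class NativeBuffer : IDisposable
{
    private IntPtr _buffer = Marshal.AllocHGlobal(1024);   // unmanaged memory
    private bool _disposed;

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);   // resources already released; skip finalization
    }

    protected virtual void Dispose(bool disposing)
    {
        if (_disposed) return;
        if (_buffer != IntPtr.Zero)
        {
            Marshal.FreeHGlobal(_buffer);   // release the unmanaged memory
            _buffer = IntPtr.Zero;
        }
        _disposed = true;
    }

    ~NativeBuffer()                  // runs only if Dispose was never called
    {
        Dispose(false);
    }
}
```

If a type holds no unmanaged resources, it should not declare a finalizer at all; finalizable objects survive at least one extra garbage collection.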
Improper use of threads. Creating threads on a per-request basis and not sharing threads using thread pools can cause performance and scalability bottlenecks for server applications. The .NET Framework provides a self-tuning thread pool that should be used by server-side applications.
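As a sketch (the Server class and its request handling are invented for illustration), queuing work to the shared pool instead of creating a thread per request looks like this:

```csharp
using System;
using System.Threading;

public static class Server
{
    // Queue work to the shared, self-tuning pool rather than new Thread(...) per request.
    public static void HandleRequest(string request)
    {
        ThreadPool.QueueUserWorkItem(state =>
        {
            string req = (string)state;
            Console.WriteLine("Processing " + req + " on pooled thread "
                              + Thread.CurrentThread.ManagedThreadId);
        }, request);
    }
}
```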
Abusing shared resources. Creating resources per request can lead to resource pressure, and failing to properly release shared resources can cause delays in reclaiming them. This quickly leads to scalability issues.
Type conversions. Implicit type conversions and mixing value and reference types lead to excessive boxing and unboxing operations. This impacts performance.
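The following sketch illustrates the boxing cost (class and method names are invented for illustration): the non-generic ArrayList boxes every int it stores, while the generic List&lt;int&gt; stores the values directly:

```csharp
using System.Collections;
using System.Collections.Generic;

public static class BoxingDemo
{
    public static void Run()
    {
        // ArrayList stores object, so every int is boxed on Add
        // and unboxed on the cast back out: 1000 extra heap allocations.
        var untyped = new ArrayList();
        for (int i = 0; i < 1000; i++)
            untyped.Add(i);              // boxes
        int first = (int)untyped[0];     // unboxes

        // List<int> stores the values directly: no boxing at all.
        var typed = new List<int>();
        for (int i = 0; i < 1000; i++)
            typed.Add(i);
        int firstTyped = typed[0];
    }
}
```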
Misuse of collections. The .NET Framework class library provides an extensive set of collection types. Each collection type is designed to be used with specific storage and access requirements. Choosing the wrong type of collection for specific situations can impact performance.
Inefficient loops. Even the slightest coding inefficiency is magnified when that code is located inside a loop. Loops that access an object’s properties are a common culprit of performance bottlenecks, particularly if the object is remote or the property getter performs significant work.
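A small illustrative sketch: hoisting a property read out of the loop avoids re-evaluating the getter on every iteration, which matters most when the getter is expensive or the object is remote:

```csharp
using System.Collections.Generic;

public static class LoopDemo
{
    public static int SumSlow(List<int> items)
    {
        int sum = 0;
        // items.Count is re-read on every iteration; cheap here, but if the
        // object were remote or the getter did real work, this would hurt.
        for (int i = 0; i < items.Count; i++)
            sum += items[i];
        return sum;
    }

    public static int SumFast(List<int> items)
    {
        int sum = 0;
        int count = items.Count;     // hoist the property read out of the loop
        for (int i = 0; i < count; i++)
            sum += items[i];
        return sum;
    }
}
```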
Your project goals must include measurable performance objectives. From the very beginning, design so that you are likely to meet those objectives, but do not over-engineer; use the planning phase to manage project risk to the right level for your project. To define your objectives, ask questions such as the following: How fast does your application need to run? At what point does its performance become unacceptable? How much CPU or memory can it consume? Your answers are your performance objectives; they establish a baseline for your application's performance and help you determine whether the application is fast enough.
Performance objectives are usually specified in terms of the following:
● Response time. Response time is the amount of time that it takes for a server to respond to a request.
● Throughput. Throughput is the number of requests that can be served by your application per unit time. Throughput is frequently measured as requests or logical transactions per second.
● Resource utilization. Resource utilization is the measure of how much server and network resources are consumed by your application. Resources include CPU, memory, disk I/O, and network I/O.
● Workload. Workload includes the total number of users and concurrent active users, data volumes, and transaction volumes.
You can identify resource costs on a per-scenario basis. Scenarios might include browsing a product catalog, adding items to a shopping cart, or placing an order. You can measure resource costs for a certain user load, or you can average resource costs when you test the application by using a certain workload profile. A workload profile consists of a representative mix of clients performing various operations.
Performance modeling helps you evaluate your design decisions against your objectives early on, before committing time and resources. Invalid design assumptions and poor design practices may mean that your application can never achieve its performance objectives. The performance modeling process presented in this guide is summarized in the accompanying figure.
The performance modeling process consists of the following steps:
1. Identify key scenarios. Identify those scenarios in which performance is important and the ones that pose the most risk to your performance objectives.
2. Identify workloads. Identify how many users, and how many concurrent users, your system needs to support.
3. Identify performance objectives. Define performance objectives for each of your key scenarios. Performance objectives reflect business requirements.
4. Identify budget. Identify your budget or constraints. This includes the maximum execution time in which an operation must be completed, and resource utilization constraints such as CPU, memory, disk I/O, and network I/O.
5. Identify processing steps. Break your scenarios down into component processing steps.
6. Allocate budget. Spread your budget determined in Step 4 across your processing steps determined in Step 5 to meet the performance objectives you defined in Step 3.
7. Evaluate. Evaluate your design against objectives and budget. You may need to modify design or spread your response time and resource utilization budget differently to meet your performance objectives.
8. Validate. Validate your model and estimates. This is an ongoing activity and includes prototyping, testing, and measuring.
First, and likely foremost, the data produced by XML serialization is significantly larger than the output of the binary serializer.
Second, XmlSerializer uses reflection to access the public properties on your types, and your properties might not be simple get/set accessors. In contrast, BinaryFormatter looks directly at fields, so you are not affected by any less-than-efficient properties you might be serializing.
Note that both XmlSerializer and BinaryFormatter provide a mechanism whereby you, as the author of the serialized types, can control serialization; see IXmlSerializable and ISerializable. As far as I know, only BinaryFormatter allows you to customize serialization when you're not the author of the types being serialized (see ISerializationSurrogate).
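As a sketch of the ISerializable approach (the Token type is invented for illustration), a type can decide exactly what BinaryFormatter persists and rebuild transient state on deserialization:

```csharp
using System;
using System.Runtime.Serialization;

[Serializable]
public class Token : ISerializable
{
    public string Name;
    private DateTime _cachedAt;   // transient state; deliberately not persisted

    public Token(string name)
    {
        Name = name;
        _cachedAt = DateTime.UtcNow;
    }

    // Called by BinaryFormatter instead of default field-by-field deserialization.
    protected Token(SerializationInfo info, StreamingContext ctx)
    {
        Name = info.GetString("name");
        _cachedAt = DateTime.UtcNow;   // rebuild transient state
    }

    public void GetObjectData(SerializationInfo info, StreamingContext ctx)
    {
        info.AddValue("name", Name);   // persist only what matters
    }
}
```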
Actually, it is the BinaryFormatter that uses reflection; the XmlSerializer generates code to serialize and deserialize, so getting and setting values is actually faster with XmlSerializer (unless your property getters and setters do nontrivial work).
However, BinaryFormatter is usually found to be faster in cases where the serialized data is predominantly numeric (arrays of ints, and so on). This is because the XmlSerializer has to convert the data to and from strings. XmlSerializer is also found to be slower than BinaryFormatter when objects are deeply nested (for example, A contains B, B contains C, C contains D, and so on).
XmlSerializer is significantly slower the first time it is used, because it works by generating helper classes to serialize and deserialize, and the generated serializer has to be compiled. Subsequent invocations should be fast. To avoid the first-time slowdown, you can use sgen.exe. You can find more information at http://msdn2.microsoft.com/en-us/library/bk3w6240(vs.80).aspx
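If you cannot use sgen.exe, you can at least pay the generation cost once by caching the serializer instance, as in this sketch (the Order type is invented for illustration):

```csharp
using System.IO;
using System.Xml.Serialization;

public static class SerializerCache
{
    // Constructing XmlSerializer triggers code generation and compilation,
    // so build it once and reuse it rather than creating one per call.
    private static readonly XmlSerializer OrderSerializer =
        new XmlSerializer(typeof(Order));

    public static string ToXml(Order order)
    {
        using (var writer = new StringWriter())
        {
            OrderSerializer.Serialize(writer, order);
            return writer.ToString();
        }
    }
}

public class Order
{
    public int Id;
}
```

Note that the simple XmlSerializer(Type) and XmlSerializer(Type, string) constructors cache their generated assemblies internally; the other constructor overloads do not, which makes explicit caching even more important with them.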
If performance is the main reason for choosing the serialization technology, then I strongly suggest you look at the DataContractSerializer that ships in .NET 3.0 (as part of WCF). When used with the binary XML encoding (also shipped as part of WCF), the DataContractSerializer outperforms both serializers in most scenarios. You can find more information about the DataContractSerializer at http://msdn2.microsoft.com/en-us/library/system.runtime.serialization.datacontractserializer.aspx. You can find information about binary XML at http://blogs.msdn.com/drnick/archive/2007/04/09/what-a-binary-encoding-means.aspx and http://www.wintellect.com/cs/blogs/jsmith/archive/2006/03/28/wcf-xmldictionaries.aspx
DataContract is the default serialization programming model for WCF. However, WCF supports more than just types marked with the DataContract attribute. In the default mode, it supports serialization of the following kinds of types:
CLR built-in types
Byte array, DateTime, TimeSpan, GUID, Uri, XmlQualifiedName, XmlElement and XmlNode array
Types marked with DataContract or CollectionDataContract attribute
Types that implement IXmlSerializable
Arrays and Collection classes including List<T>, Dictionary<K,V> and Hashtable.
Types marked with Serializable attribute including those that implement ISerializable.
Some types may implement more than one of the above programming models. In such cases, a programming model is chosen based on its priority as given by the list above. For example, Hashtable is a collection class but also implements ISerializable; it is serialized as a collection type. DataSet implements both IXmlSerializable and ISerializable, and it is serialized as IXmlSerializable.
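A minimal DataContract sketch (the Customer type is invented for illustration): only members marked with DataMember are serialized, so serialization is opt-in rather than driven by visibility:

```csharp
using System.IO;
using System.Runtime.Serialization;

[DataContract]
public class Customer
{
    [DataMember] public int Id;
    [DataMember] public string Name;

    public string Scratch;          // no [DataMember]: skipped by the serializer
}

public static class DcsDemo
{
    public static byte[] Serialize(Customer c)
    {
        var serializer = new DataContractSerializer(typeof(Customer));
        using (var stream = new MemoryStream())
        {
            serializer.WriteObject(stream, c);   // writes XML by default
            return stream.ToArray();
        }
    }
}
```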
ASP.NET System Performance Counters
ASP.NET supports the ASP.NET system performance counters listed in the following table. These counters aggregate information from all ASP.NET applications on a Web server computer.
There is a significant difference between the State Server Sessions counters found in the ASP.NET performance object, which apply only to the server computer on which the state server is running, and the Sessions counters found in the ASP.NET Applications performance object, which apply only to user sessions that occur in-process.
The number of times that an application has been restarted during the Web server’s lifetime. Application restarts are incremented each time an Application_OnEnd event is raised. An application restart can occur because of changes to the Web.config file, changes to assemblies stored in the application’s Bin directory, or when an application must be recompiled due to numerous changes in ASP.NET Web pages. Unexpected increases in this counter can mean that problems are causing your Web application to recycle. In such circumstances you should investigate as soon as possible.
This value is reset to zero every time the Internet Information Services (IIS) host is restarted.
The number of applications running concurrently on the server computer.
The number of requests that have been disconnected due to a communication failure.
The number of requests waiting for service from the queue. When this number starts to increment linearly with increased client load, the Web server computer has reached the limit of concurrent requests that it can process. The default maximum for this counter is 5,000. You can change this setting in the Machine.config file.
The total number of requests not executed because of insufficient server resources to process them. This counter represents the number of requests that return a 503 HTTP status code, indicating that the server is too busy.
Request Wait Time
The number of milliseconds that the most recent request waited in the queue for processing.
Session State Server Connections Total
The total number of session-state connections made to the computer on which out-of-process session-state data is stored. For more information, see Session-State Modes.
Session SQL Server Connections Total
The total number of session-state connections made to the Microsoft SQL Server database in which session-state data is stored. For more information, see Session-State Modes.
State Server Sessions Abandoned
The number of user sessions that have been explicitly abandoned. These are sessions that are ended by specific user actions, such as closing the browser or navigating to another site. This counter is available only on the computer where the state server service (aspnet_state) is running.
State Server Sessions Active
The number of currently active user sessions. This counter is available only on the computer where the state server service (aspnet_state) is running.
State Server Sessions Timed Out
The number of user sessions that have become inactive through user inaction. This counter is available only on the computer where the state server service (aspnet_state) is running.
State Server Sessions Total
The number of sessions created during the lifetime of the process. This counter is the total value of State Server Sessions Active, State Server Sessions Abandoned, and State Server Sessions Timed Out. This counter is available only on the computer where the state server service (aspnet_state) is running.
Worker Process Restarts
The number of times a worker process has been restarted on the server computer. A worker process can be restarted if it fails unexpectedly or when it is intentionally recycled. If this counter increases unexpectedly, you should investigate as soon as possible.
Worker Process Running
The number of worker processes running on the server computer.
ASP.NET Application Performance Counters
ASP.NET supports the application performance counters listed in the following table. These counters enable you to monitor the performance of a single instance of an ASP.NET application. A unique instance named __Total__ is available for these counters. This instance aggregates counters for all applications on a Web server (similar to the global counters described earlier in this topic). The __Total__ instance is always available. The counters will display zero when no applications are currently executing on the server.
The number of requests that are using anonymous authentication.
The number of requests per second that are using anonymous authentication.
Cache Total Entries
The total number of entries in the cache. This counter includes both use of the cache by the ASP.NET page framework and use of the application cache through cache APIs.
Cache Total Hits
The total number of hits from the cache. This counter includes both use of the cache by the ASP.NET page framework and use of the application cache through cache APIs.
Cache Total Misses
The number of failed cache requests per application. This counter includes both use of the cache by the ASP.NET page framework and use of the application cache through cache APIs.
Cache Total Hit Ratio
The ratio of hits to misses for the cache. This counter includes both use of the cache by the ASP.NET page framework and use of the application cache through cache APIs.
Cache Total Turnover Rate
The number of additions and removals to the cache per second, which is useful in helping to determine how effectively the cache is being used. If the turnover rate is high, the cache is not being used efficiently.
Cache API Entries
The total number of entries in the application cache.
Cache API Hits
The total number of hits from the cache when it is accessed only through the external cache APIs. This counter does not track use of the cache by the ASP.NET page framework.
Cache API Misses
The total number of failed requests to the cache when accessed through the external cache APIs. This counter does not track use of the cache by the ASP.NET page framework.
Cache API Hit Ratio
The cache hit-to-miss ratio when accessed through the external cache APIs. This counter does not track use of the cache by the ASP.NET page framework.
Cache API Turnover Rate
The number of additions and removals to the cache per second when used through the external APIs (excluding use by the ASP.NET page framework). It is useful in helping determine how effectively the cache is being used. If the turnover rate is high, then the cache is not being used effectively.
The total number of compilations that have taken place during the lifetime of the current Web server process. Compilation occurs when a file with an .aspx, .asmx, .ascx, or .ashx file name extension, or a code-behind source file, is dynamically compiled on the server.
This number will initially climb to a peak value as requests are made to all parts of an application. Once compilation occurs, however, the resulting compiled output is saved to disk, where it is reused until its source file changes. This means that even in the event of a process restart the counter can remain at zero (inactive) until the application is modified or redeployed.
The number of requests that occur while debugging is enabled.
Errors During Preprocessing
The number of errors that occurred during parsing, excluding compilation and run-time errors.
Errors During Compilation
The number of errors that occur during dynamic compilation, excluding parser and run-time errors.
Errors During Execution
The total number of errors that occur during the execution of an HTTP request, excluding parser and compilation errors.
Errors Unhandled During Execution
The total number of unhandled errors that occur during the execution of HTTP requests. An unhandled error is any run-time exception that is not trapped in user code and enters the ASP.NET internal error-handling logic. Exceptions to this rule occur when:
Custom errors are enabled, an error page is defined, or both.
The Page_Error event is defined in user code and either the error is cleared (using the ClearError method) or a redirect is performed.
Errors Unhandled During Execution/Sec
The number of unhandled exceptions per second that occur during the execution of HTTP requests.
The total number of errors that occur during the execution of HTTP requests, including any parser, compilation, or run-time errors. This counter is the sum of the Errors During Compilation, Errors During Preprocessing, and Errors During Execution counters. A well-functioning Web server should not generate errors. If errors occur in your ASP.NET Web application, they may skew any throughput results because of very different code paths for error recovery. Investigate and fix any bugs in your application before performance testing.
The number of errors per second that occur during the execution of HTTP requests, including any parser, compilation, or run-time errors.
Output Cache Entries
The total number of entries in the output cache.
Output Cache Hits
The total number of requests serviced from the output cache.
Output Cache Misses
The number of failed output-cache requests per application.
Output Cache Hit Ratio
The percentage of total requests serviced from the output cache.
Output Cache Turnover Rate
The number of additions and removals to the output cache per second. If the turnover rate is high, the cache is not being used effectively.
Pipeline Instance Count
The number of active request pipeline instances for the specified ASP.NET application. Since only one execution thread can run within a pipeline instance, this number gives the maximum number of concurrent requests that are being processed for a given application. In most circumstances it is better for this number to be low when under load, which signifies that the CPU is well utilized.
Request Bytes In Total
The total size in bytes of all requests.
Request Bytes Out Total
The total size in bytes of responses sent to a client. This does not include HTTP response headers.
The number of requests currently executing.
The total number of failed requests. Any status codes greater than or equal to 400 will increment this counter.
Requests that cause a 401 status code increment this counter and the Requests Not Authorized counter. Requests that cause a 404 or 414 status code increment this counter and the Requests Not Found counter. Requests that cause a 500 status code increment this counter and the Requests Timed Out counter.
Requests Not Found
The number of requests that failed because resources were not found (status code 404 or 414).
Requests Not Authorized
The number of requests that failed due to no authorization (status code 401).
The number of requests that executed successfully (status code 200).
Requests Timed Out
The number of requests that timed out (status code 500).
The total number of requests since the service was started.
The number of requests executed per second. This represents the current throughput of the application. Under constant load, this number should remain within a certain range, barring other server work (such as garbage collection, cache cleanup thread, external server tools, and so on).
The number of sessions currently active. This counter is supported only with in-memory session state.
The number of sessions that have been explicitly abandoned. This counter is supported only with in-memory session state.
Sessions Timed Out
The number of sessions that timed out. This counter is supported only with in-memory session state.
The total number of sessions. This counter is supported only with in-memory session state.
The number of transactions canceled for all active ASP.NET applications.
The number of transactions committed for all active ASP.NET applications.
The number of transactions in progress for all active ASP.NET applications.
The total number of transactions for all active ASP.NET applications.
The number of transactions started per second for all active ASP.NET applications.
An ASP.NET HTTP handler is the process (frequently referred to as the “endpoint”) that runs in response to a request made to an ASP.NET Web application. The most common handler is an ASP.NET page handler that processes .aspx files. When users request an .aspx file, the request is processed by the page through the page handler. You can create your own HTTP handlers that render custom output to the browser.
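A minimal sketch of a custom handler (the ReportHandler name and the *.report extension are invented for illustration; the handler must also be mapped to that extension in web.config under httpHandlers):

```csharp
using System.Web;

public class ReportHandler : IHttpHandler
{
    public bool IsReusable
    {
        get { return true; }   // stateless, so one instance can serve many requests
    }

    public void ProcessRequest(HttpContext context)
    {
        // Render custom output directly to the response.
        context.Response.ContentType = "text/plain";
        context.Response.Write("Report generated for " + context.Request.Path);
    }
}
```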
An HTTP module is an assembly that is called on every request that is made to your application. HTTP modules are called as part of the ASP.NET request pipeline and have access to life-cycle events throughout the request. HTTP modules let you examine incoming and outgoing requests and take action based on the request.
In IIS 6.0, the ASP.NET request pipeline is separate from the Web server request pipeline. In IIS 7.0, the ASP.NET request pipeline and the Web server request pipeline can be integrated into a common request pipeline. In IIS 7.0, this is referred to as Integrated mode. The unified pipeline has several benefits for ASP.NET developers. For example, it lets managed-code modules receive pipeline notifications for all requests, even if the requests are not for ASP.NET resources. However, if you want, you can run IIS 7.0 in Classic mode, which emulates ASP.NET running in IIS 6.0. For more information, see ASP.NET Application Life Cycle Overview for IIS 7.0.
ASP.NET HTTP modules are like ISAPI filters because they are invoked for all requests. However, they are written in managed code and are fully integrated with the life cycle of an ASP.NET application. You can put custom module source code in the App_Code folder of your application, or you can put compiled custom modules as assemblies in the Bin folder of an application.
ASP.NET uses modules to implement various application features, which includes forms authentication, caching, session state, and client script services. In each case, when those services are enabled, the module is called as part of a request and performs tasks that are outside the scope of any single page request. Modules can consume application events and can raise events that can be handled in the Global.asax file. For more information about application events, see ASP.NET Application Life Cycle Overview for IIS 5.0 and 6.0 and ASP.NET Application Life Cycle Overview for IIS 7.0.
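A sketch of a custom module (the TimingModule name and the response header are invented for illustration) that hooks two pipeline events to time every request; it would be registered in web.config under httpModules (IIS 6.0/Classic mode) or system.webServer/modules (IIS 7.0 Integrated mode):

```csharp
using System;
using System.Diagnostics;
using System.Web;

public class TimingModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        // Start a stopwatch when the request enters the pipeline...
        app.BeginRequest += (sender, e) =>
            app.Context.Items["timing.sw"] = Stopwatch.StartNew();

        // ...and report the elapsed time as the response leaves it.
        app.EndRequest += (sender, e) =>
        {
            var sw = (Stopwatch)app.Context.Items["timing.sw"];
            app.Context.Response.AppendHeader(
                "X-Elapsed-Ms", sw.ElapsedMilliseconds.ToString());
        };
    }

    public void Dispose() { }
}
```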
ASP.NET 2.0 has many secrets which, when revealed, can give you a big performance and scalability boost. For instance, there are secret bottlenecks in the Membership and Profile providers that can be solved easily to make authentication and authorization faster. Furthermore, the ASP.NET HTTP pipeline can be tweaked to avoid executing unnecessary code on each and every request. Not only that, the ASP.NET worker process can be pushed to its limit to squeeze out every drop of performance. Page fragment output caching on the browser (not on the server) can save a significant amount of download time on repeated visits. On-demand UI loading can give your site a fast and smooth feel. Finally, Content Delivery Networks (CDNs) and proper use of HTTP cache headers can make your website screaming fast when implemented properly. In this article, you will learn techniques that can give your ASP.NET application a big performance and scalability boost and prepare it to perform well under 10 to 100 times more traffic.
In this article we will discuss the following techniques:
- ASP.NET pipeline optimization
- ASP.NET process configuration optimization
- Things you must do for ASP.NET before going live
- Content Delivery Network
- Caching AJAX calls on browser
- Making best use of Browser Cache
- On demand progressive UI loading for fast smooth experience
- Optimize ASP.NET 2.0 Profile provider
- How to query ASP.NET 2.0 Membership tables without bringing down the site
- Prevent Denial of Service (DOS) attack
The above techniques can be implemented on any ASP.NET website, especially those that use ASP.NET 2.0's Membership and Profile providers.
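As one concrete example, the pipeline optimization mentioned above typically means removing default HTTP modules your site never uses, so they are no longer invoked on every request. A sketch for web.config follows; the module names are the standard ASP.NET 2.0 ones inherited from machine.config, and you should remove only modules whose features you are certain your site does not use:

```xml
<!-- web.config: skip pipeline modules this site does not need. -->
<system.web>
  <httpModules>
    <remove name="WindowsAuthentication" />
    <remove name="PassportAuthentication" />
    <remove name="FileAuthorization" />
  </httpModules>
</system.web>
```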
You can learn a lot more about performance and scalability improvement of ASP.NET and ASP.NET AJAX websites from my book – Building a Web 2.0 portal using ASP.NET 3.5.