Passing Parameters in C#

In C#, parameters can be passed either by value or by reference. Passing parameters by reference allows function members (methods, properties, indexers, operators, and constructors) to change the value of the parameters and have that change persist. To pass a parameter by reference, use the ref or out keyword. For simplicity, only the ref keyword is used in the examples of this topic. For information on the difference between ref and out, see ref, out, and Passing Arrays Using ref and out.

It also includes the following examples:

Example  Demonstrates                                Uses ref or out
1        Passing value types by value                No
2        Passing value types by reference            Yes
3        Swapping value types (two integers)         Yes
4        Passing reference types by value            No
5        Passing reference types by reference        Yes
6        Swapping reference types (two strings)      Yes
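Example 3 from the table might, for instance, look like the following minimal sketch (the method and variable names here are illustrative, not taken from the original examples). Because both parameters are passed with ref, the swap performed inside the method persists in the caller:

```csharp
using System;

class SwapDemo
{
    // Both parameters are passed by reference, so the caller's
    // variables are swapped, not local copies.
    static void Swap(ref int a, ref int b)
    {
        int temp = a;
        a = b;
        b = temp;
    }

    static void Main()
    {
        int x = 1, y = 2;
        Swap(ref x, ref y);
        Console.WriteLine("x = {0}, y = {1}", x, y); // x = 2, y = 1
    }
}
```

Note that the ref keyword is required at the call site as well as in the method signature; without it, the integers would be passed by value and the swap would not be visible to the caller.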


SQL Server Optimization

This document describes different ways of improving the performance of SQL Server queries. Most of it covers index optimization, with occasional references to particular code snippets; in other words, it describes how to achieve the best performance given the tables and the queries that run against them.
Database design issues and entity-relationship modeling techniques are out of the scope of this document, even though a flawed design can severely impact performance in many ways.

Microsoft SQL Server 9.0 Technical Articles


DECLARE @TableName sysname
DECLARE cur_showfragmentation CURSOR FOR
SELECT table_name FROM information_schema.tables WHERE table_type = 'BASE TABLE'
OPEN cur_showfragmentation
FETCH NEXT FROM cur_showfragmentation INTO @TableName
WHILE @@FETCH_STATUS = 0
BEGIN
    SELECT 'Show fragmentation for the ' + @TableName + ' table'
    FETCH NEXT FROM cur_showfragmentation INTO @TableName
END
CLOSE cur_showfragmentation
DEALLOCATE cur_showfragmentation

Calling a stored procedure inside another stored procedure

DECLARE @SQL_Error INT
DECLARE @RetCode INT
BEGIN TRANSACTION -- start transaction
-- operation1
IF @@ERROR != 0
    GOTO Rollback_Point
-- call stored procedure 2
EXEC @RetCode = spTestTran_Savepoint
SET @SQL_Error = @@ERROR
IF @SQL_Error != 0 OR @RetCode != 0
    GOTO Rollback_Point
COMMIT TRANSACTION
RETURN 0 -- normal exit
Rollback_Point:
ROLLBACK TRANSACTION -- undo any changes
RETURN -1 -- return with error
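The body of spTestTran_Savepoint is not shown in this document. A common pattern for such an inner procedure, sketched here as an assumption based on its name, is to use SAVE TRANSACTION so that it can undo only its own work without rolling back the caller's outer transaction:

```sql
-- Hypothetical inner procedure: rolls back to a savepoint on error,
-- leaving the caller's outer transaction intact.
CREATE PROCEDURE spTestTran_Savepoint
AS
BEGIN
    DECLARE @TranCount INT
    SET @TranCount = @@TRANCOUNT
    IF @TranCount > 0
        SAVE TRANSACTION SP_Savepoint   -- called inside an open transaction
    ELSE
        BEGIN TRANSACTION               -- called standalone

    -- ... inner work goes here ...

    IF @@ERROR != 0
    BEGIN
        IF @TranCount > 0
            ROLLBACK TRANSACTION SP_Savepoint  -- undo only this procedure's work
        ELSE
            ROLLBACK TRANSACTION
        RETURN -1
    END

    IF @TranCount = 0
        COMMIT TRANSACTION
    RETURN 0
END
```

Note that ROLLBACK TRANSACTION with a savepoint name does not decrement @@TRANCOUNT, which is why the caller's IF @RetCode != 0 check is still needed to decide whether the outer transaction should be rolled back.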

SQL Server Lock Contention: NOLOCK and ROWLOCK

Relational databases, like Microsoft’s SQL Server, use locks to prevent multiple users from making conflicting modifications to a set of data. When a set of data is locked by a user, no other users can modify that same set of data until the first user finishes modifying the data and relinquishes the lock. There are exceptions, but let’s not go there.
Some databases, SQL Server included, use locks to prevent users from seeing uncommitted modifications. In these systems if UserA is modifying some set of data, then UserB and all the rest of the users must wait until UserA is done modifying that data before they can even get a shot at reading the data, let alone modifying it.
Databases place locks at all levels of their physical hierarchies: rows, pages (typically a few KB of rows), extents (typically a few pages), entire tables, and entire databases. Some databases only use fine-grained row locks, others don’t do row locks at all and only allow rough-grained page, extent, table, and database locks. Most databases, SQL Server included, support row locking, but often use rough-grained locks. This is because lock management is a royal pain. Locks aren’t small or simple entities, so if you only do row-level locking, you can get yourself into a world of pain: a million-row update can easily swamp memory and be a bear to manage.

Lock Contention Described
Databases that don’t do just row-level locking often use a technique called lock escalation to achieve better performance. Unless it’s clear from the outset that a whole table will be modified, these databases start off using row locks, and they make plans to trade these locks in for rough-grained locks later if too many rows are modified.
Unfortunately, lock escalation introduces and amplifies a whole new problem: deadlocks. If two users modify semantically unrelated but physically near data in two separate tables in reverse order, both start off with row locks, then try to upgrade them to page locks; now each user wants something the other holds, so both are stuck. This is called a deadlock.

For example:

•UserA modifies some rows in TableA, causing a page lock affecting not just the rows UserA modified, but many others

•UserB modifies some rows in TableB, causing a page lock affecting not just the rows UserB modified, but many others

•UserA wants to modify some rows that UserB has locked (but not modified) in TableB

•UserB wants to modify, or maybe just access, some rows that UserA has locked (but not modified) in TableA

Something’s gotta give. To deal with this problem, the database occasionally looks for deadlocks, and kills off one of the transactions so the other can finish. It usually kills the one that’s made the least modifications so that it minimizes the cost of rolling back changes. Databases that use only row-level locking almost never have this problem because two users rarely want to modify the exact same row, and even more rarely do they attain locks in the perfectly poor order needed to cause a deadlock.
Also, databases like this use lock timeouts to prevent users from waiting too long for a lock. Query timeouts also factor in here. You can write code to retry queries that time out, but this only automates database congestion. Any timeout that is often reached will only serve to worsen the user experience. Things simply should not take that long.
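In SQL Server, the lock wait limit mentioned above can be set per session with SET LOCK_TIMEOUT, in milliseconds. A minimal sketch (the table name is a placeholder, not from this document):

```sql
-- Wait at most 5 seconds for any lock before giving up.
SET LOCK_TIMEOUT 5000

SELECT * FROM Orders WHERE OrderID = 42
-- If the lock cannot be acquired in time, SQL Server raises
-- error 1222 ("Lock request time out period exceeded") and the
-- statement is canceled; the application decides whether to retry.
```

The default of -1 means wait indefinitely, which is exactly the "users spend a lot of time waiting" scenario described above.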
In practice and under high load, SQL Server’s locking system, which is based on lock escalation, does not perform well. Why? Lock contention. Lock contention is the combined problem of deadlocks and waiting for locks. In a system in which many users are modifying the database at once, and many more are trying to access it concurrently, the locks are flying, users spend a lot of time waiting to acquire locks, deadlocks are frequent, and users are far from happy.

Using NOLOCK politely asks SQL Server to ignore locks and read directly from the tables. This means you completely circumvent the lock system, which is a major performance and scalability improvement. However, you also completely circumvent the lock system, which means your code is living dangerously. You might read the not-necessarily-valid uncommitted modifications of a running transaction. This is a calculated risk.
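The hint is applied per table in the FROM clause. A minimal sketch (table name is a placeholder):

```sql
-- Read without taking shared locks: fast, never blocked by writers,
-- but may see uncommitted ("dirty") data from in-flight transactions.
SELECT COUNT(*) FROM Orders WITH (NOLOCK)
```

This is equivalent to reading at the READ UNCOMMITTED isolation level, but scoped to a single table in a single query rather than the whole session.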

Using ROWLOCK politely asks SQL Server to use only row-level locks. You can use it in SELECT, UPDATE, and DELETE statements. You’d think that an UPDATE in which you specify the primary key would always cause a row lock, but when SQL Server gets a batch with a bunch of these, and some of them happen to fall in the same page (depending on the situation, this can be quite likely, e.g. updating all the files in a folder, when those files were created at pretty much the same time), you’ll see page locks, and bad things will happen. And if you don’t specify a primary key for an UPDATE or DELETE, the database has no reason to assume that only a few rows will be affected, so it probably goes straight to page locks, and bad things happen.
By specifically requesting row-level locks, these problems are avoided. However, be aware that if you are wrong and lots of rows are affected, either the database will take the initiative and escalate to page locks, or you’ll have a whole army of row locks filling your server’s memory and bogging down processing. One thing to be particularly aware of is the “Management/Current Activity” folder in Enterprise Manager: it takes a long time to load information about a lot of locks. The information is valuable, and this technique is very helpful, but don’t be surprised if you see hundreds of locks in the “Locks/Processes” folder after employing it. Just be glad you don’t have lock timeouts or deadlocks.
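Like NOLOCK, the ROWLOCK hint goes in a WITH clause on the table. A minimal sketch (table and column names are placeholders):

```sql
-- Ask for row locks only, even if several updated rows share a page.
UPDATE Orders WITH (ROWLOCK)
SET Status = 'Shipped'
WHERE OrderID = 42
```

The hint is a request, not a guarantee: as noted above, SQL Server may still escalate if the number of row locks grows too large.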

source: SQL Server Lock Contention Tamed: The Joys Of NOLOCK and ROWLOCK