Creating SQL Server columns: A best practices guide

  1. Use the smallest data type necessary to implement the required functionality. I have worked on several systems which used NUMERIC or FLOAT data types even though not a single row contained digits to the right of the decimal point. An easy way to optimize such systems is to simply change the data type from NUMERIC to INTEGER.
  2. Ensure that each column has a descriptive name; doing so makes the database more self-documenting and easier to understand for those who didn’t develop it. Do not call columns “column 2,” “ID” or similar names. If you must abbreviate column names, be sure to create a data dictionary denoting what data element each column is supposed to store.
  3. Many tables have “natural” keys; these are columns that have a business meaning, such as customer or account name. Although natural keys are immediately identifiable by business users, they aren’t always unique and they tend to change over time. Consider adding “surrogate” keys – columns that have no business meaning but can uniquely identify each row. Identity columns are a great example of surrogate keys.
    While you could use a combination of natural keys to uniquely identify a record, joining tables on multiple columns will normally be slower than joining the same tables on a single column with a small data type (such as INTEGER). If you do use surrogate keys, be sure that their names include the table or entity name. For example, do not add a column called “ID” to each table. Instead use “customer_id”, “supplier_id”, “store_id” and so forth.
  4. Each table allows up to 1,024 columns, but normally you don’t need nearly as many. For transactional systems, ensure the data model is highly normalized; this means the same data element (customer address, customer phone number, product description, etc.) should not be repeated in multiple tables. For reporting systems you can allow some redundancy, but only if thorough testing confirms that redundant columns improve query performance.
  5. If possible and appropriate, use fixed-length as opposed to variable-length data types. For example, if you know a product code will always be 50 characters long, use CHAR(50) as opposed to VARCHAR(50). Variable-length columns impose an overhead that isn’t always necessary.
  6. Use UNICODE data types (NCHAR, NVARCHAR, NTEXT) only when necessary. If your database will only contain characters covered by a single code page (for example, Western European languages), you shouldn’t need UNICODE data types. Realize that UNICODE data types use twice as much storage as their non-UNICODE counterparts.
  7. Be sure to specify appropriate collation for your string columns. If you don’t specify the collation, SQL Server will use the collation defined at the database level. Collation determines the character set and sort order supported by the column. If the correct collation isn’t specified, you could see unexpected results when retrieving string data.
  8. Attempt to configure column nullability correctly. If the column should always have a value, configure it as NOT NULL. Using default constraints is more efficient than allowing NULL values. A NULL value isn’t equal to anything else – empty string, zero or even other NULL values; a NULL denotes that the value is unknown. With a character column, you can often use a default value of “unknown” as opposed to allowing NULL values.
  9. Use large variable-length data types sparingly. Variable-length columns are normally stored on the same data pages as the rest of the record. However, if the column values push the total row size past the 8,060-byte limit, some variable-length columns are moved to row-overflow data pages, which imposes additional overhead during data retrieval.
  10. Avoid using TEXT, NTEXT and IMAGE data types for any newly created table columns. These data types are deprecated and might not be available in future versions of SQL Server. Use VARCHAR(MAX), NVARCHAR(MAX) and VARBINARY(MAX) data types instead.
  11. If you must implement multiple large columns, such as VARCHAR(MAX) or VARCHAR(4000), consider splitting your table into two or more tables with fewer columns. This technique is sometimes called vertical partitioning. These tables will have one-to-one relationships among them, and each will share a common primary key column.
  12. You cannot use columns with large variable-length data types, such as VARBINARY(MAX), NVARCHAR(MAX) or VARCHAR(MAX), as clustered or non-clustered index keys. However, consider adding such columns as included columns to your non-clustered indexes. Doing so can help you have more “covered” queries, which can be resolved by seeking through the index as opposed to scanning the table. Included columns are not counted towards the 900-byte limit for index keys, nor towards the 16-column limit on index keys.
  13. The TIMESTAMP data type is a misnomer because it doesn’t track date or time values. Rather SQL Server uses a column with this data type to track the sequence of data modifications. Instead of checking each column within the table you can simply examine the values of the TIMESTAMP column to determine whether any column values have been modified.
  14. Consider using BIT data type columns for Boolean values, as opposed to storing “TRUE/FALSE”, “yes/no” or other character strings. SQL Server can store up to 8 BIT columns in a single byte. One scenario where this comes in handy is “soft deletes”: instead of physically removing a record from a table, you simply tag it as deleted by flipping the bit value of the “deleted” column to 1. (A table definition illustrating several of these points appears right after this list.)
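
As a minimal sketch (the table and all names are illustrative only, not taken from a real system), the following definition applies several of the points above: a small surrogate identity key, types sized to the data, NOT NULL columns with default constraints, and a BIT column for soft deletes.
CREATE TABLE dbo.customer
(
    customer_id   INT IDENTITY(1,1) NOT NULL,              -- surrogate key (point 3)
    customer_name VARCHAR(100)      NOT NULL,              -- descriptive column name (point 2)
    country_code  CHAR(2)           NOT NULL,              -- fixed length where the length is known (point 5)
    credit_limit  INT               NOT NULL
        CONSTRAINT df_customer_credit_limit DEFAULT (0),   -- default instead of allowing NULL (point 8)
    is_deleted    BIT               NOT NULL
        CONSTRAINT df_customer_is_deleted DEFAULT (0),     -- soft-delete flag (point 14)
    CONSTRAINT pk_customer PRIMARY KEY CLUSTERED (customer_id)
);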

Keep these pointers in mind when creating and maintaining SQL Server tables. Although creating tables might seem like a trivial exercise, database architects should carefully weigh the consequences of each option when building large scale systems.

(source: http://searchsqlserver.techtarget.com)

What is the difference between clustered and nonclustered indexes?

There are clustered and nonclustered indexes. A clustered index is a special type of index that reorders the way records in the table are physically stored; therefore a table can have only one clustered index. The leaf nodes of a clustered index contain the data pages.

A nonclustered index is a special type of index in which the logical order of the index does not match the physical stored order of the rows on disk. The leaf node of a nonclustered index does not consist of the data pages. Instead, the leaf nodes contain index rows.
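
A minimal sketch, assuming a hypothetical dbo.orders table with order_id, customer_id and order_date columns:
CREATE CLUSTERED INDEX cx_orders ON dbo.orders (order_id);               -- physically orders the rows; only one per table
CREATE NONCLUSTERED INDEX ix_orders_customer ON dbo.orders (customer_id)
    INCLUDE (order_date);                                                -- included column can help cover queries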

See the Index Optimization Tips article for more details.

Can’t View CHM Files in IE7 – A FIX

If you are having problems viewing CHM files (like the new WCF documentation CHM files), you might be seeing a blank IE screen when you click on any of the subject headings contained in the file. If this is the case, right-click on the CHM file and select Properties from the provided menu. You can then click on the Unblock button at the bottom. This will then allow the CHM file to be viewed using IE as it should be.

Transact-SQL Fundamentals

T-SQL is designed to add structure to the handling of sets of data. Because of this, it does not provide several language features that application development needs. If you do a lot of application programming development, you’ll find that T-SQL is in many ways the exact opposite of how you think when programming in VB, C#, Java, or any other structured development language.

T-SQL Batches
A query is a single SQL DML statement, and a batch is a collection of one or more T-SQL statements. The entire collection is sent to SQL Server from the front-end application as a single unit of code. SQL Server parses the entire batch as a unit. Any syntax error will cause the entire batch to fail, meaning that none of the batch will be executed. However, the parsing does not check any object names or schemas because a schema may change by the time the statement is executed.

Terminating a Batch
A SQL script file or a Query Analyzer window may contain multiple batches. If this is the case, a batch-separator keyword terminates each batch. By default, the batch-separator keyword is go (similar to how the Start button is used to shut down Windows). The batch-separator keyword must be the only keyword in the line. Any other characters, even a comment, on the same line will neutralize the batch separator.
The batch separator is actually a function of SQL Server Management Studio, not SQL Server. It can be changed on the Query Execution page (Tools➪Options), but I wouldn’t recommend creating a custom batch separator (at least not for your friends). Terminating a batch kills all local variables declared in that batch; local temporary tables persist for the duration of the connection, and whether a cursor survives depends on whether it was declared as LOCAL or GLOBAL.
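A minimal sketch of two batches separated by go; the local variable declared in the first batch no longer exists in the second:
DECLARE @msg VARCHAR(20);
SET @msg = 'first batch';
PRINT @msg;
go
PRINT @msg;   -- error: @msg must be declared again in this batch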

DDL Commands
Some T-SQL DDL commands, such as Create Procedure, are required to be the first command in the batch. Very long scripts that create several objects often include numerous go batch terminators. Because SQL Server evaluates syntax by the batch, using go throughout a long script also helps locate syntax errors.
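For example (a sketch with hypothetical object names), a go is needed so that Create Procedure starts its own batch:
CREATE TABLE dbo.demo (id INT);
go
CREATE PROCEDURE dbo.get_demo
AS
SELECT id FROM dbo.demo;
go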

Switching Databases
Interactively, the current database is indicated in the SQL Editor toolbar and can be changed there. In code, the current database is selected with the use command. Use can be inserted within a batch to specify the database from that point on:
USE CHA2
It’s a good practice to explicitly specify the correct database with the use command, rather than assume that the user will select the correct database prior to running the script.

Executing Batches
A batch can be executed in several ways:
✦ A complete SQL script (including all the batches in the script) may be executed by opening the .sql file with SQL Server Management Studio’s SQL Editor and pressing F5, clicking the ! Execute toolbar button, or selecting Query -> Execute. I have altered my Windows file settings so that double-clicking a .sql file opens SQL Server Management Studio.
✦ Selected T-SQL statements may be executed within SQL Server Management Studio’s SQL Editor by means of highlighting those commands and pressing F5, clicking the ! Execute toolbar button, or selecting Query -> Execute.
✦ An application can submit a T-SQL batch using ADO or ODBC for execution.
✦ A SQL script may be executed by means of running the SQLCmd command-line utility and passing the SQL script file as a parameter.
✦ The SQLCmd utility has several parameters and may be configured to meet nearly any command-line need (a minimal example follows this list).
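A minimal command-line sketch (the server name and file paths are placeholders; -E uses Windows authentication, -i is the input script, -o the output log):
sqlcmd -S MyServer\MyInstance -E -i C:\scripts\create_objects.sql -o C:\scripts\create_objects.log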

Executing a Stored Procedure
When calling a stored procedure within a SQL batch, the exec command executes the stored procedure with a few special rules. In a sense, because line returns are meaningless to SQL Server, the exec command is serving to terminate the previous T-SQL command. If the stored-procedure call is the first line of a batch (and if it’s the only line, then it’s also the first line), the stored-procedure call doesn’t require the exec command. However, including the exec command anyway won’t cause any problems and prevents an error if the code is cut and pasted later.
The following two system-stored–procedure calls demonstrate the use of the exec command within a batch:
sp_help;
EXEC sp_help;

Row versioning database options

SELECT name, snapshot_isolation_state_desc, is_read_committed_snapshot_on
FROM sys.databases;

Use the sys.databases catalog view to determine the state of both row versioning database options.

All updates to user tables and some system tables stored in master and msdb generate row versions.

The ALLOW_SNAPSHOT_ISOLATION option is automatically set ON in the master and msdb databases, and cannot be disabled.

Users cannot set the READ_COMMITTED_SNAPSHOT option ON in master, tempdb, or msdb.
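
As a hedged sketch (the database name is a placeholder), the two options are enabled per user database; setting READ_COMMITTED_SNAPSHOT requires that no other connections are using the database, or the WITH ROLLBACK IMMEDIATE clause:
ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;
ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;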

The twenty European destinations of excellence in tourism and intangible heritage

Commission Vice President Günter Verheugen presented the “European Destinations of Excellence” (EDEN) awards to this year’s twenty best destinations in tourism and intangible heritage at the European Annual Tourism Forum in Bordeaux on 18 September. EDEN, launched in 2006, promotes the specific characteristics of European destinations and offers particular support to those where tourism is developed in a way that ensures its social, cultural and environmental sustainability. It offers an opportunity to create a platform for the exchange of good practices at European level, while rewarding sustainable forms of tourism and successful business models.

The aim of EDEN is twofold:

  • To draw attention to the value, diversity and shared characteristics of European tourist destinations, and to promote destinations where economic growth is pursued in a way that ensures the social, cultural and environmental sustainability of tourism.
  • To create awareness of Europe’s tourist diversity and quality, and to promote Europe as the foremost tourism destination in the world.

 

The twenty European destinations of excellence awarded in 2008, chosen by national juries:

Austria: Steirisches Vulkanland

Steirisches Vulkanland or Styrian volcano land is rich in volcanic formations, thermal water resources, historic sacral and architectural monuments, folk art, as well as in traditions expressed in a characteristic culture of festivals and celebrations.

Belgium: Ath

Ath is famous for the Ducasse – a procession of giants, a parade of characters that have been gathering for over 500 years and draws visitors into a charming medieval festival. Folklore inspired from the giants and related traditions has boosted the city’s image.

Bulgaria: Belogradchik, see also special link

Myths, legends, even traces of ancient Thrace are waiting to be discovered by visitors to Belogradchik, or ‘small white town’, situated in the foothills of the Balkan mountains in north west Bulgaria.

Croatia: Đurđevac, the Rooster town

The orchards, meadows and vineyards make up the county of Koprivnica-Križevci, at the heart of which lies Đurđevac. The town’s historical, cultural and traditional heritage is based on the Legend of the Rooster.

Cyprus: Agros, see also special link

Agros is an ideal year-round rural destination to visit that has developed its famous rosewater industry and offers unique opportunities to participate in celebrations of local cultural heritage and nature’s beauty, like the rose festival in May.

Estonia: Viljandi

Sometimes called the cultural capital of Estonia, it is Viljandi’s unique location, people, nature and architecture which make the town so special and set it apart. The heritage of the place has been kept vibrant by its people, especially through songs and the famous Viljandi folk music festival.

Finland: Wild Taiga

Wild Taiga’s location on Finland’s easternmost borderline makes it an enchanting place featuring green coniferous forests, esker valleys, clear waters and rich wildlife. It promotes the local way of life and traditions of its people and brings nature closer to visitors.

France: The tourist wine route of the Jura

A special wine festival, the annual Percée du Vin Jaune – now one of the biggest in France – has successfully been turned into a tourist attraction. Visitors can follow the 80 km-long route, highlighting the historical and cultural heritage of the region as well as its gastronomy.

Greece: Prefecture of Grevena

The prefecture of Grevena is a mountainous area with a rich natural habitat: forests, rivers, valleys as well as flora and fauna abound. Nature intertwines with culture and history, making Grevena an attractive destination for tourists.

Hungary: Hortobágy

Hortobágy is not only part of Europe’s Great Plains region, the largest uninterrupted natural grassland in Europe, but also the place of herdsmen who have preserved a very ancient way of life and where both the diversity of species and habitat have been preserved.

Ireland: Carlingford and the Cooley Peninsula

Resounding legends, myths and folk tales await travellers to Ireland’s east-coast town of Carlingford. The place is known for its Lough whose sweeping backdrop of Slieve Foye – the highest mountain in County Louth – gives it unrivalled natural appeal.

Italy: Comune di Corinaldo

Corinaldo, in the Marches region, is a vivid example of Italian garden landscapes, with neatly marked out fields and cultivated meadows. The town has preserved valuable collections of art works as well as a network of more than seventy still flourishing historic theatres.

Latvia: Latgalian potters, masters of clay, see also special link

Rēzekne is a city in the heart of the Latgale region in eastern Latvia famous for its pottery traditions. Rēzekne has been a centre of spirituality, culture and education. It is defined by green fields and lakes, unpaved roads and woods.

Lithuania: Plateliai, see also special link

The borough of Plateliai, which also includes the Žemaitija national park, is brimming with traditions, old architecture, local cuisine, handicrafts and the customs of the region, and is especially valued for its ancient farmsteads, folk architecture and the Shrove Tuesday Carnival.

Luxembourg: Echternach, see also special link

Echternach, the “Little Switzerland” of Luxembourg, is a town which prides itself on its cultural heritage, welcoming visitors to a centre of culture that has been attracting tourists since the Middle Ages and is especially famous for the dancing procession of Echternach.

Malta: Kercem, hamlet of Santa Lucija

Kercem and the hamlet of Santa Lucija organise cultural activities to foster appreciation of their inherited traditions and offer three main festivities: the “Ikla tan-Nanna” gastronomic event, the Bish-Sahha wine festival and the “Santa Lucija by Night” light festival.

Romania: Horezu depression

The Horezu depression, or valley, has an exceptional cultural and natural patrimony which grants this destination individuality and uniqueness. The Horezu enamelled pottery – unique to Romania through its chromatics and floral motifs – is emblematic of the region.

Slovenia: The Soča valley

The Soča valley is a colourful valley rich in waterfalls, pools, canyons as well as flora and fauna. The legends in the air give it a fairy-tale atmosphere. The “Soča valley river stories” event promotes the cultural, ethnological and natural heritage of the area, as well as peace.

Spain: Sierra de las Nieves

Sierra de las Nieves is an unspoilt natural paradise lying in the central part of the province of Málaga. It is a place for adventurous leisure pursuits such as horse riding, canyoning, canoeing and kayaking, eco-routes in off-road vehicles, balloon trips, trekking and hiking.

Turkey: Edirne

As the second largest capital of the Ottoman Empire, Edirne has a rich cultural heritage. Its wrestling tradition dates as far back as 1361, and the mosques, religious centres, bridges, bazaars, caravanserais and palaces all make Edirne a living museum.

(source: http://europa.eu/rapid/pressReleasesAction.do?reference=MEMO/08/570&format=HTML&aged=0&language=EN&guiLanguage=en)

Basics of Locking

Lock Granularity

Lock granularity refers to the level at which locks occur:
  • Row
  • Table
  • Page
  • Database

Locking at a smaller granularity, such as at the row level, increases concurrency, but more locks must be held if many rows are locked. Locking at a larger granularity, such as at the table level, reduces concurrency because locking an entire table restricts access to any part of the table by other transactions. However, in table-level locking, fewer locks must be held.
By default, SQL Server Compact Edition uses row-level locking for data pages and page-level locking for index pages.
The following table shows the resources that can be locked by SQL Server Compact Edition.

Lock  Description
RID   Row identifier. Used to lock a single row within a table.
PAG   Data page or index page.
TAB   Entire table, including all data and indexes.
MD    Table metadata. Used to protect the table schema.
DB    Entire database.

Lock Modes

Lock modes determine how concurrent transactions can access data. SQL Server Compact Edition determines which lock mode to use based on the resources that must be locked and the operations that must be performed.
The following table describes the lock modes supported by SQL Server Compact Edition.
Lock mode Description
Shared (S) Protects a resource for read access. No other transactions can modify the data while shared (S) locks exist on the resource.
Exclusive (X) Indicates a data modification, such as an insert, an update, or a deletion. Ensures that multiple updates cannot be made to the same resource at the same time.
Update (U) Prevents a common form of deadlock. Only one transaction at a time can obtain a U lock on a resource. If the transaction modifies the resource, then the U lock is converted to an X lock.
Schema Used when an operation dependent on the schema of a table is executing. The types of schema locks are schema modification (Sch-M) and schema stability (Sch-S).
Intent Establishes a lock hierarchy. The most common types of intent lock are IS, IU, and IX. These locks indicate that a transaction is operating on some, but not all, resources lower in the hierarchy. The lower-level resources will have an S, U, or X lock.
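In the full SQL Server engine (not Compact Edition), the locks currently held, together with their granularity and mode, can be inspected through a dynamic management view; a minimal sketch:
SELECT request_session_id, resource_type, request_mode, request_status
FROM sys.dm_tran_locks;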
Important:
For the default isolation level of Read Committed, a SELECT statement in SQL Server Compact Edition does not require S locks to read data. Although this is required in Microsoft SQL Server, SQL Server Compact Edition does not need the S lock to enforce Read Committed. The only lock required for a SELECT statement is Sch-S, which protects the schema while the operation executes. As a result, SELECT statements are highly concurrent. For more information, see Transaction Isolation Level.

(source: http://msdn.microsoft.com/en-us/library/ms172909(SQL.90).aspx or http://www.sqlteam.com/article/introduction-to-locking-in-sql-server, which includes examples)

 

See also: http://msdn.microsoft.com/en-us/library/ms175519.aspx

SET TRANSACTION ISOLATION LEVEL (Transact-SQL)

Controls the locking and row versioning behavior of Transact-SQL statements issued by a connection to SQL Server.

READ UNCOMMITTED
Specifies that statements can read rows that have been modified by other transactions but not yet committed. Transactions running at the READ UNCOMMITTED level do not issue shared locks to prevent other transactions from modifying data read by the current transaction.
READ UNCOMMITTED transactions are also not blocked by exclusive locks that would prevent the current transaction from reading rows that have been modified but not committed by other transactions. When this option is set, it is possible to read uncommitted modifications, which are called dirty reads. Values in the data can be changed and rows can appear or disappear in the data set before the end of the transaction. This option has the same effect as setting NOLOCK on all tables in all SELECT statements in a transaction. This is the least restrictive of the isolation levels. In SQL Server, you can also minimize locking contention while protecting transactions from dirty reads of uncommitted data modifications using either:
– The READ COMMITTED isolation level with the READ_COMMITTED_SNAPSHOT database option set to ON.
– The SNAPSHOT isolation level.
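A minimal sketch of the READ UNCOMMITTED level itself (the table name is a placeholder); the NOLOCK table hint shown on the second statement has the same effect for a single table when the session is still at READ COMMITTED:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT COUNT(*) FROM dbo.orders;                   -- may return dirty (uncommitted) data
SELECT COUNT(*) FROM dbo.orders WITH (NOLOCK);     -- per-table equivalent of the same behavior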

READ COMMITTED
Specifies that statements cannot read data that has been modified but not committed by other transactions. This prevents dirty reads. Data can be changed by other transactions between individual statements within the current transaction, resulting in nonrepeatable reads or phantom data. This option is the SQL Server default.
The behavior of READ COMMITTED depends on the setting of the READ_COMMITTED_SNAPSHOT database option:
– If READ_COMMITTED_SNAPSHOT is set to OFF (the default), the Database Engine uses shared locks to prevent other transactions from modifying rows while the current transaction is running a read operation. The shared locks also block the statement from reading rows modified by other transactions until the other transaction is completed. The shared lock type determines when it will be released. Row locks are released before the next row is processed. Page locks are released when the next page is read, and table locks are released when the statement finishes.
– If READ_COMMITTED_SNAPSHOT is set to ON, the Database Engine uses row versioning to present each statement with a transactionally consistent snapshot of the data as it existed at the start of the statement. Locks are not used to protect the data from updates by other transactions.

When the READ_COMMITTED_SNAPSHOT database option is ON, you can use the READCOMMITTEDLOCK table hint to request shared locking instead of row versioning for individual statements in transactions running at the READ COMMITTED isolation level.
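For example (the table and column names are assumed for illustration), the hint is applied per table reference:
SELECT order_id, order_date
FROM dbo.orders WITH (READCOMMITTEDLOCK)    -- shared locks instead of row versioning for this statement
WHERE customer_id = 42;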

REPEATABLE READ
Specifies that statements cannot read data that has been modified but not yet committed by other transactions and that no other transactions can modify data that has been read by the current transaction until the current transaction completes.
Shared locks are placed on all data read by each statement in the transaction and are held until the transaction completes. This prevents other transactions from modifying any rows that have been read by the current transaction. Other transactions can insert new rows that match the search conditions of statements issued by the current transaction. If the current transaction then retries the statement it will retrieve the new rows, which results in phantom reads. Because shared locks are held to the end of a transaction instead of being released at the end of each statement, concurrency is lower than the default READ COMMITTED isolation level. Use this option only when necessary.

SNAPSHOT
Specifies that data read by any statement in a transaction will be the transactionally consistent version of the data that existed at the start of the transaction. The transaction can only recognize data modifications that were committed before the start of the transaction. Data modifications made by other transactions after the start of the current transaction are not visible to statements executing in the current transaction. The effect is as if the statements in a transaction get a snapshot of the committed data as it existed at the start of the transaction.
Except when a database is being recovered, SNAPSHOT transactions do not request locks when reading data. SNAPSHOT transactions reading data do not block other transactions from writing data. Transactions writing data do not block SNAPSHOT transactions from reading data.
During the roll-back phase of a database recovery, SNAPSHOT transactions will request a lock if an attempt is made to read data that is locked by another transaction that is being rolled back. The SNAPSHOT transaction is blocked until that transaction has been rolled back. The lock is released immediately after it has been granted.
The ALLOW_SNAPSHOT_ISOLATION database option must be set to ON before you can start a transaction that uses the SNAPSHOT isolation level. If a transaction using the SNAPSHOT isolation level accesses data in multiple databases, ALLOW_SNAPSHOT_ISOLATION must be set to ON in each database.
A transaction that started under another isolation level cannot be switched to SNAPSHOT; attempting to do so causes the transaction to abort. If a transaction starts in the SNAPSHOT isolation level, you can change it to another isolation level and then back to SNAPSHOT. A transaction starts the first time it accesses data.
A transaction running under SNAPSHOT isolation level can view changes made by that transaction. For example, if the transaction performs an UPDATE on a table and then issues a SELECT statement against the same table, the modified data will be included in the result set.
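A minimal sketch of a SNAPSHOT transaction (the table name is a placeholder; ALLOW_SNAPSHOT_ISOLATION is assumed to already be ON for the database):
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
    SELECT COUNT(*) FROM dbo.orders;    -- the snapshot is taken at the first data access
    -- rows committed by other sessions after this point are not visible here
    SELECT COUNT(*) FROM dbo.orders;    -- returns the same count as the first SELECT
COMMIT TRANSACTION;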

SERIALIZABLE
Specifies the following:
– Statements cannot read data that has been modified but not yet committed by other transactions.
– No other transactions can modify data that has been read by the current transaction until the current transaction completes.
– Other transactions cannot insert new rows with key values that would fall in the range of keys read by any statements in the current transaction until the current transaction completes.

Range locks are placed in the range of key values that match the search conditions of each statement executed in a transaction. This blocks other transactions from updating or inserting any rows that would qualify for any of the statements executed by the current transaction. This means that if any of the statements in a transaction are executed a second time, they will read the same set of rows. The range locks are held until the transaction completes. This is the most restrictive of the isolation levels because it locks entire ranges of keys and holds the locks until the transaction completes. Because concurrency is lower, use this option only when necessary. This option has the same effect as setting HOLDLOCK on all tables in all SELECT statements in a transaction.
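A hedged sketch (table and column names assumed); the range of keys matching the WHERE clause is protected until the transaction completes:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
    SELECT order_id FROM dbo.orders
    WHERE order_date >= '20080101' AND order_date < '20080201';
    -- other sessions cannot insert, update or delete rows in this key range
    -- until this transaction commits or rolls back
COMMIT TRANSACTION;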

Remarks
Only one of the isolation level options can be set at a time, and it remains set for that connection until it is explicitly changed. All read operations performed within the transaction operate under the rules for the specified isolation level unless a table hint in the FROM clause of a statement specifies different locking or versioning behavior for a table.

The transaction isolation levels define the type of locks acquired on read operations. Shared locks acquired for READ COMMITTED or REPEATABLE READ are generally row locks, although the row locks can be escalated to page or table locks if a significant number of the rows in a page or table are referenced by the read. If a row is modified by the transaction after it has been read, the transaction acquires an exclusive lock to protect that row, and the exclusive lock is retained until the transaction completes. For example, if a REPEATABLE READ transaction has a shared lock on a row, and the transaction then modifies the row, the shared row lock is converted to an exclusive row lock.

With one exception, you can switch from one isolation level to another at any time during a transaction. The exception occurs when changing from any isolation level to SNAPSHOT isolation. Doing this causes the transaction to fail and roll back. However, you can change a transaction started in SNAPSHOT isolation to any other isolation level.

When you change a transaction from one isolation level to another, resources that are read after the change are protected according to the rules of the new level. Resources that are read before the change continue to be protected according to the rules of the previous level. For example, if a transaction changed from READ COMMITTED to SERIALIZABLE, the shared locks acquired after the change are now held until the end of the transaction.

If you issue SET TRANSACTION ISOLATION LEVEL in a stored procedure or trigger, when the object returns control the isolation level is reset to the level in effect when the object was invoked. For example, if you set REPEATABLE READ in a batch, and the batch then calls a stored procedure that sets the isolation level to SERIALIZABLE, the isolation level setting reverts to REPEATABLE READ when the stored procedure returns control to the batch.
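A minimal sketch of that behavior (the procedure is hypothetical; the session's current level can be read from the sys.dm_exec_sessions view, where 3 means REPEATABLE READ):
CREATE PROCEDURE dbo.set_serializable
AS
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
go
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
EXEC dbo.set_serializable;
-- back in the batch, the level has reverted to REPEATABLE READ:
SELECT transaction_isolation_level
FROM sys.dm_exec_sessions
WHERE session_id = @@SPID;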

When you use sp_bindsession to bind two sessions, each session retains its isolation level setting. Using SET TRANSACTION ISOLATION LEVEL to change the isolation level setting of one session does not affect the setting of any other sessions bound to it.

SET TRANSACTION ISOLATION LEVEL takes effect at execute or run time, and not at parse time.
Optimized bulk load operations on heaps block queries that are running under the following isolation levels:
– SNAPSHOT
– READ UNCOMMITTED
– READ COMMITTED using row versioning

(source: http://msdn.microsoft.com/en-us/library/ms173763.aspx)

Microsoft JScript runtime error: Automation server can’t create object

Microsoft JScript runtime error: Automation server can’t create object.
Microsoft JScript runtime error: Object doesn’t support this property or method.
When you implement AJAX features in your ASP.NET page, the client’s browser may show the above error messages. When you use client debugging, you will see code like the following:
// MicrosoftAjax.js
Function.__typeName="Function";Function.__class=true;
Function.createCallback=function(b,a){ /* ... minified framework code ... */ };
Function._validateParams=function(e,c){ /* ... minified framework code ... */ };

Solution:
This happens because MicrosoftAjax.js, served through Resource.axd, tries to create an object using xmlHttp = new ActiveXObject("Msxml2.XMLHTTP"), but the security settings in the client’s Internet Explorer are too high; in particular, "Script ActiveX controls marked safe for scripting" on the Security tab is disabled. Make sure that the Internet zone in the Security Zones settings is set to Medium.

(Also set "Run ActiveX controls and plug-ins" to Enabled.)

BEST PRACTICES Boxing and unboxing

Boxing and unboxing incur overhead, so you should avoid them when programming intensely
repetitive tasks. Boxing also occurs when you call virtual methods that a structure inherits from
System.Object, such as ToString. Follow these tips to avoid unnecessary boxing:
■ Implement type-specific versions (overloads) for procedures that accept various value types.
It is better practice to create several overloaded procedures than one that accepts an Object argument.
■ Use generics whenever possible instead of accepting Object arguments.
■ Override the ToString, Equals, and GetHashCode virtual members when defining structures.