SQL Trigger

An UPDATE on a table can affect anywhere from 0 to N rows. Regardless of how many rows are affected, the trigger fires only once per UPDATE statement.
Inside a trigger you have access to two pseudo-tables named "inserted" and "deleted". These pseudo-tables are available only inside triggers. They have the same structure as the table on which the trigger is defined, and they contain, respectively, the new values and the old values of the updated rows.
Through these two tables you can tell exactly what the statement changed. They are available in all triggers, except that in INSERT triggers "deleted" is empty and in DELETE triggers "inserted" is empty.

Example using "inserted":
CREATE TRIGGER [dbo].[UpdatePendingCalculationLockFlag]
ON [dbo].[PendingCalculation]
AFTER UPDATE
AS
BEGIN
    DECLARE @transactionId uniqueidentifier
    DECLARE @lockFlag int

    DECLARE c1 CURSOR LOCAL FAST_FORWARD FOR
        SELECT TransactionId, LockFlag FROM inserted

    OPEN c1
    FETCH NEXT FROM c1 INTO @transactionId, @lockFlag
    WHILE @@FETCH_STATUS = 0
    BEGIN
        IF (@lockFlag = 2)
            UPDATE LoadTest_PendingCalculation
            SET LockFlag = 2    -- assumed SET clause; the original omitted it
            WHERE TransactionId = @transactionId

        IF (@lockFlag = 3)
            UPDATE LoadTest_PendingCalculation
            SET LockFlag = 3    -- assumed SET clause; the original omitted it
            WHERE TransactionId = @transactionId

        FETCH NEXT FROM c1 INTO @transactionId, @lockFlag
    END

    CLOSE c1
    DEALLOCATE c1
END

What I am after in this example: on every update that happens in the parent table (PendingCalculation), I want to modify something in the secondary table (LoadTest_PendingCalculation). That is why I use a cursor to walk through the rows that may have been modified (the "inserted" pseudo-table).
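For completeness, the same effect can usually be achieved without a cursor, since "inserted" can be joined like any other table. A minimal sketch, assuming the secondary table has a LockFlag column (the original UPDATE statements omitted their SET clauses, so the target column is an assumption):

UPDATE lt
SET lt.LockFlag = i.LockFlag          -- assumed target column
FROM LoadTest_PendingCalculation AS lt
JOIN inserted AS i ON lt.TransactionId = i.TransactionId
WHERE i.LockFlag IN (2, 3)

A single set-based UPDATE like this handles all affected rows at once and generally performs better than a row-by-row cursor.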


How To Use SQL Server to Analyze Web Logs

Internet Information Server/Services provides a number of formats for gathering data in the form of web logs. For busy sites, these text-based flat files sometimes become too much of a burden to review and are ignored. A better way to review the data would make these logs a more useful resource for administrators and webmasters.
This article describes a method to import IIS logs in World Wide Web Consortium (W3C) Extended Logging format into Microsoft SQL Server to facilitate the review of the IIS log files. The techniques provided can also be altered for other log file formats.
In W3C Extended Logging format the fields are mostly self-explanatory: date and time are just what they seem; [c-ip] is the IP address of the client; [cs-method] is the HTTP method of the request that was served; [cs-uri-stem] is the document that was requested; [cs-uri-query] is the query string sent as part of the logged request; [sc-status] is the status code returned by the server; [sc-bytes] is the number of bytes returned to the user; [time-taken] is the time in milliseconds the server took to complete processing of the request; [cs(Cookie)] is the cookie, or persistent data, in the request; and [cs(Referer)] is the URL of the previous site visited by the user.

The logs are formatted as follows:

date time c-ip cs-method cs-uri-stem cs-uri-query sc-status sc-bytes time-taken cs(User-Agent) cs(Cookie) cs(Referer)

The header of the log files corresponds to the fields chosen in the Properties of the Web site, on the Web Site tab, and in the case of W3C Extended Logging, the Extended Properties tab. If your web logs are already in a table in Microsoft SQL Server, it is likely because of ODBC logging. However, when you are using ODBC logging the fields are not configurable. IIS Help has instructions on setting up ODBC logging, which includes using Logtemp.sql to create the table in the expected structure.
You could use Enterprise Manager to create the table, but to make it faster and to aid in the automation of the process, instead use the following script in Query Analyzer to create the table:

CREATE TABLE [dbo].[tablename] (
    [date] [datetime] NULL,
    [time] [datetime] NULL,
    [c-ip] [varchar] (50) NULL,
    [cs-method] [varchar] (50) NULL,
    [cs-uri-stem] [varchar] (255) NULL,
    [cs-uri-query] [varchar] (2048) NULL,
    [sc-status] [int] NULL,
    [sc-bytes] [int] NULL,
    [time-taken] [int] NULL,
    [cs(User-Agent)] [varchar] (255) NULL,
    [cs(Cookie)] [varchar] (2048) NULL,
    [cs(Referer)] [varchar] (2048) NULL
)
Note that some of these fields are quite large and may not be necessary for reviewing your particular log files.
Once the table has been created, you can import the data by using the Import Wizard, mapping from the *.log file to the database and table.
Using the Wizard can be tedious, so the following can be used to expedite importing the web logs:

BULK INSERT [dbo].[tablename] FROM 'c:\weblog.log'
WITH (FIELDTERMINATOR = ' ', ROWTERMINATOR = '\n')  -- W3C Extended log fields are space-delimited
Note that the bulk insert will fail when it encounters lines that start with "#". For web logs, this includes the first four lines, as well as any later header lines, since the header lines are rewritten whenever the service is restarted. The following Microsoft Knowledge Base article provides a utility and source code to remove these lines and prepare the log for the bulk insert into SQL Server: http://support.microsoft.com/kb/296093/EN-US/ (FILE: PrepWebLog Utility Prepares IIS Logs for SQL Bulk Insert)
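Where the log file can be pre-processed with standard tools, the "#" lines can also be stripped with a one-line filter (a sketch; the file name weblog.log is an assumption):

```shell
# Remove the comment/header lines beginning with "#" so BULK INSERT
# does not fail on them; the cleaned copy goes to weblog_clean.log
grep -v '^#' weblog.log > weblog_clean.log
```

The cleaned file can then be fed to the BULK INSERT statement above in place of the raw log.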

Description of the time-taken field in IIS 6.0 and IIS 7.0 HTTP logging

By default, IIS logs HTTP site activity by using the W3C Extended log file format. You can use IIS Manager to select the fields to include in the log file. One of these fields is the time-taken field.
The time-taken field measures the length of time that it takes for a request to be processed. The client-request time stamp is initialized when HTTP.sys receives the first byte of the request. HTTP.sys is the kernel-mode component that is responsible for HTTP logging for IIS activity. The client-request time stamp is initialized before HTTP.sys begins parsing the request. The client-request time stamp is stopped when the last IIS response send completion occurs.
Beginning in IIS 6.0, the time-taken field typically includes network time. Before HTTP.sys logs the value in the time-taken field, HTTP.sys usually waits for the client to acknowledge the last response packet send operation or HTTP.sys waits for the client to reset the underlying TCP connection. Therefore, when a large response or large responses are sent to a client over a slow network connection, the value of the time-taken field may be more than expected.
Note: The value in the time-taken field does not include network time if one of the following conditions is true:

  • The response size is less than or equal to 2 KB, and the response is sent from memory.
  • TCP buffering is used.

more info: http://kbalertz.com/Feedback.aspx?kbNumber=324279

Recycling an Application Pool (IIS 6.0)

Application pools are made up of a listening and routing structure in HTTP.sys and one or more ready-to-start worker processes that are waiting to process incoming requests. In worker process isolation mode, you can configure IIS to periodically restart worker processes under certain conditions. This automatic restarting of worker processes is called recycling.

Recycling an application pool causes the WWW service to shut down all running worker processes that are serving the application pool, and then start new worker processes. Whether the WWW service starts the new worker processes before it stops the existing ones depends on the DisallowOverlappingRotation property in the metabase. Recycling an application pool does not alter any state in HTTP.sys or change any configuration in the metabase.


When an application pool is serviced by more than one worker process, the recycle operation is staggered in some scenarios. Only when a worker process is recycled on demand or when all of the worker processes hit their memory limits at the same time would they be restarted simultaneously.

For information about recycling an application pool on demand, see Configuring Worker Processes for Recycling.

(source: http://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/f11b8294-cc42-4e9c-8482-6257bf3b80f2.mspx?mfr=true )

HTTP Protocol Stack (IIS 6.0)

The HTTP listener is implemented as a kernel-mode device driver called the HTTP protocol stack (HTTP.sys). IIS 6.0 uses HTTP.sys, which is part of the networking subsystem of the Windows operating system, as a core component.

Earlier versions of IIS use Windows Sockets API (Winsock), which is a user-mode component, to receive HTTP requests. By using HTTP.sys to process requests, IIS 6.0 delivers the following performance enhancements:
• Kernel-mode caching. Requests for cached responses are served without switching to user mode.
• Kernel-mode request queuing. Requests cause less overhead in context switching, because the kernel forwards requests directly to the correct worker process. If no worker process is available to accept a request, the kernel-mode request queue holds the request until a worker process picks it up.

Using HTTP.sys and the new WWW service architecture provides the following benefits:
• When a worker process fails, service is not interrupted; the failure is undetectable by the user because the kernel queues the requests while the WWW service starts a new worker process for that application pool.
• Requests are processed faster because they are routed directly from the kernel to the appropriate user-mode worker process instead of being routed between two user-mode processes.

How HTTP.sys Works
When you create a Web site, IIS registers the site with HTTP.sys, which then receives any HTTP requests for the site. HTTP.sys functions like a forwarder, sending the Web requests it receives to the request queue for the user-mode process that runs the Web site or Web application. HTTP.sys also sends responses back to the client.
Other than retrieving a stored response from its internal cache, HTTP.sys does not process the requests that it receives. Therefore, no application-specific code is ever loaded into kernel mode. As a result, bugs in application-specific code cannot affect the kernel or lead to system failures.
HTTP.sys provides the following services in IIS 6.0:
• Routing HTTP requests to the correct request queue.
• Caching of responses in kernel mode.
• Performing all text-based logging for the WWW service.
• Implementing Quality of Service (QoS) functionality, which includes connection limits, connection timeouts, queue-length limits, and bandwidth throttling.

How HTTP.sys Handles Kernel-Mode Queuing
When IIS 6.0 runs in worker process isolation mode, HTTP.sys listens for requests and queues those requests in the appropriate queue. Each request queue corresponds to one application pool. An application pool corresponds to one request queue within HTTP.sys and one or more worker processes.
When IIS 6.0 runs in IIS 5.0 isolation mode, HTTP.sys runs like it runs in worker process isolation mode, except that it routes requests to a single request queue.
If a defective application causes the user-mode worker process to terminate unexpectedly, HTTP.sys continues to accept and queue requests, provided that the WWW service is still running, queues are still available, and space remains in the queues.
When the WWW service identifies an unhealthy worker process, it starts a new worker process if outstanding requests are waiting to be serviced. Thus, although a temporary disruption occurs in user-mode request processing, an end user does not experience the failure because TCP/IP connections are maintained, and requests continue to be queued and processed.

(source: http://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/a2a45c42-38bc-464c-a097-d7a202092a54.mspx?mfr=true )

Working with IIS 6.0 Application Pools – The Performance Tab

The Performance tab of IIS 6.0 application pool configuration properties contains settings that monitor the performance of the application pool and the worker processes running in it. These are the Performance configuration properties:

• Shutdown worker processes after being idle for (time in minutes)
This setting limits the number of minutes that a worker process can remain idle before it is shut down by the system.
• Limit the kernel request queue (number of requests)
This setting limits the number of requests that can be queued for this application pool before HTTP 503 error messages are returned to users.
• Enable CPU monitoring
This setting activates CPU monitoring and enables the next three settings to be used.
• Maximum CPU use (percentage)
This setting specifies the maximum CPU usage allowed before an action is taken.
• Refresh CPU usage numbers (in minutes)
This setting specifies the interval at which CPU usage is checked.
• Action performed when CPU usage exceeds maximum CPU use
This setting specifies the action performed when CPU usage is found to have exceeded the maximum CPU use. Two options are available:
1. Take No Action – IIS logs the CPU maximum usage event in the System event log but takes no corrective action.
2. Shutdown – IIS logs the event and requests that the application pool’s worker processes be recycled, based on the Shutdown Time Limit set on the Health tab.

• Maximum number of worker processes
This setting specifies the maximum number of worker processes that will be created to handle this application pool.

Notes on Shutting Down Idle Worker Processes
Although worker processes start on demand based on incoming requests and thus resources are allocated only when necessary, worker processes don’t free up the resources they use until they are shut down. In the default configuration, worker processes are shut down after they have been idle for 20 minutes. This ensures that any physical or virtual memory used by the worker process is made available to other processes running on the server, which is especially important if the server is busy.
Shutting down idle worker processes is a good idea in most instances, and if system resources are at a premium you might even want idle processes shut down sooner than 20 minutes. For example, on a moderately busy server with many configured sites and applications where there are intermittent resource issues, reducing the idle time-out could resolve the problems of resource availability.
Keep in mind though that shutting down idle worker processes can have unintended consequences. For example, on a dedicated server with ample memory and resources, shutting down idle worker processes clears cached components out of memory. These components must be reloaded into memory when the worker process starts and requires them, which might make the application seem unresponsive or sluggish.

Notes on Limiting Request Queues
When hundreds or thousands of new requests pour into an application pool’s request queue, the IIS server can become overloaded and overwhelmed. To prevent this from occurring, you can limit the length of the application pool request queue. Once a queue limit is set, IIS checks the queue size each time before adding a new request to the queue. If the queue limit has been reached, IIS rejects the request and sends the client an HTTP Error 503: Service Unavailable message.
Requests that are already queued remain queued even if you change the queue limit to a value that is less than the current queue length. The only consequence here would be that new requests wouldn’t be added to the queue until the current queue length is less than the queue limit.
The default limit for an application pool is 4000 requests. On a moderately sized server with a few application pools configured, this might be a good value. However, on a server with multiple CPUs and lots of RAM, this value might be too low. On a server with limited resources or many application pools configured, this value might be too high. Here you might want to use a formula of Memory Size in Megabytes x Number of CPUs x 10 divided by Number of Configured Application Pools to determine what the size of the average request queue should be. This is meant to be a guideline to give you a starting point for consideration and not an absolute rule. For example, on a server with two CPUs, 1024 MB of RAM, and twenty configured application pools, the size of the average request queue limit would be around 1,000 requests. You might have some application pools configured with request queue limits of 750 and others with request queue limits of 1,250. However, if the same server had only one configured application pool, you probably wouldn’t configure a request queue limit of 10,000.
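The guideline formula above (memory in MB × number of CPUs × 10, divided by the number of configured application pools) can be checked with quick shell arithmetic; the numbers below are the ones from the example in the text:

```shell
# Queue-limit guideline: memory (MB) x CPUs x 10 / application pools
MEM_MB=1024
CPUS=2
POOLS=20
echo $(( MEM_MB * CPUS * 10 / POOLS ))   # prints 1024, i.e. roughly 1,000 requests
```

Plugging in your own server's figures gives a starting point, which you would then adjust per pool as described above.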

Notes on CPU Monitoring
Typically, you’ll want to set a value to at least 90 percent. However, to ensure that worker processes are recycled only when they’re blocking other processes, you should set the value to 100 percent.
In most cases you won’t want to check the CPU usage more frequently than every five minutes. If you monitor CPU usage more frequently, you might waste resources that could be better used by other processes.

Notes on Configuring Multiple Worker Processes for Application Pools
Multiple worker processes running in their own context can share responsibility for handling requests for an application pool. This configuration setting is referred to as a Web Garden. When you define a Web Garden, each new request is assigned to a worker process according to the round-robin load balancing technique.
If a single application is placed in an application pool serviced by multiple worker processes, any and all available worker processes handle requests queued for the application. This is a multiple-worker-process-to-single-application configuration, and it is best used when you want to improve the application’s request-handling performance and reduce any possible contention for resources with other applications. In this case the application might have heavy usage during peak periods and moderate-to-heavy usage during other times, or individuals using the application might have specific performance expectations that must be met.
If multiple applications are placed in an application pool serviced by multiple worker processes, any and all worker processes available handle the requests queued for any applicable application. This is a multiple-worker-process-to-multiple-application configuration, and it is best used when you want to improve request handling performance and reduce resource contention for multiple applications but don’t want to dedicate resources to any single application. In this case the various applications in the application pool might have different peak usage periods or might have varying resource needs.
It’s important to note that worker processes aren’t started automatically and don’t use resources until they are needed. Rather, they are started as necessary to meet the demand based on incoming requests. For example, if you configure a maximum of five worker processes for an application pool, there may be at any given time from zero to five worker processes running in support of applications placed in that application pool.
You also need to keep in mind that when you assign multiple worker processes to a busy application pool, each worker process uses server resources when it is started and might affect the performance of applications in other application pools. Adding worker processes won’t resolve latency issues caused by network communications or bandwidth, and it can reduce the time it takes to process requests only if those requests were queued and waiting rather than being actively processed. A poorly engineered application will still respond poorly, and at some point you would need to look at optimizing the application code for efficiency and timeliness.

How ASP.NET Works with IIS 6.0 (IIS 6.0)

If you run your ASP.NET applications in IIS 6.0, you obtain a significant advantage over running your ASP.NET applications in IIS 5.0. For example, in IIS 5.0, you cannot isolate individual applications into their own worker processes; hence, if one ASP.NET application fails, it can affect all the other applications. IIS 6.0, however, provides a new architecture that allows you to isolate your ASP.NET applications without incurring a performance penalty.

Before you run ASP.NET applications on IIS 6.0, be sure that you understand how ASP.NET works with the two IIS 6.0 isolation modes. When running in IIS 5.0 isolation mode, ASP.NET uses its own request processing model and configuration settings in the Machine.config file — just as it does when ASP.NET runs on IIS 5.x. When running in worker process isolation mode, ASP.NET disables its own request processing model and uses the new worker process architecture of IIS 6.0.

Important: The version of ASP.NET that is included with Windows Server 2003 is not available with the Microsoft® Windows® XP 64-Bit Edition and the 64-bit versions of Windows Server 2003. For more information, see Features unavailable on 64-bit versions of the Windows Server 2003 family in Help and Support Center for Windows Server 2003.

ASP.NET Request Processing Model
When ASP.NET is enabled on a server running Windows Server 2003, the ASP.NET request processing model depends on the application isolation mode in which IIS 6.0 is running. The preferred application isolation mode for IIS 6.0 is worker process isolation mode, which enables you to use application pools, worker process recycling, and application pool health monitoring. When IIS 6.0 is running in worker process isolation mode, ASP.NET uses the IIS 6.0 request processing model settings.

In almost all situations, if your ASP.NET applications are running on Windows Server 2003, configure them to run in worker process isolation mode. However, if your application has compatibility issues in this mode — for example, if your application depends on read-raw-data ISAPI filters — you can change your configuration to run IIS 6.0 in IIS 5.0 isolation mode.

If you configure IIS 6.0 to run in IIS 5.0 isolation mode, ASP.NET runs in its own request processing model, Aspnet_wp.exe, and uses its own configuration settings for process configuration, which are stored in the Machine.config configuration file. When your server is running in IIS 5.0 isolation mode, the ASP.NET ISAPI extension implements a request processing model that is similar to worker process isolation mode and contains worker process management capabilities similar to those provided by the WWW service. ASP.NET also provides recycling, health monitoring, and processor affinity.

When ASP.NET is installed on servers running Windows XP or Windows 2000 Server, ASP.NET runs in IIS 5.1 or IIS 5.0, respectively, and runs in the Aspnet_wp.exe process by default. The ASP.NET request processing model provides process recycling, health detection, and support for affinities between worker processes and particular CPUs on a server with multiple CPUs.
When Different Versions of ASP.NET Share the Same Application Pool

When multiple versions of the .NET Framework are installed on a computer that uses IIS 6.0, you might encounter the following error message in the Application Event log:

Note: It is not possible to run different versions of ASP.NET in the same IIS process. Please use the IIS Administration Tool to reconfigure your server to run the application in a separate process.

This error occurs when more than one version of ASP.NET is configured to run in the same process. Different versions of the .NET Framework and run time cannot coexist side by side within the same process. Therefore, an ASP.NET application that uses one version of the run time must not share a process with an application that uses a different version. This error commonly occurs when two or more applications are mapped to different versions of ASP.NET but share the same application pool.

(source: http://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/f1a39358-5ab8-4dc1-a60d-eedea7360780.mspx?mfr=true )

Alternatives to Cursors

A monthly corporate report generates around 30 different KPIs (Key Performance Indicators). Each KPI is generated using a different query over a number of joined tables in a SQL Server database.

In an attempt to automate the report generation process, I’ve created a table in my database, in which I have stored the variable parts in all the 30 queries for the KPIs.

The table is comprised of the following columns: (This is just a demo table)

KPI_Name    Criteria_1    Criteria_2    Criteria_3
========    ==========    ==========    ==========
KPI_1       XYZ           ABC           KLM
KPI_2       QWE           GHJ           MNO
KPI_3       ……            ……            ……

The query for KPI_1 is:
SELECT SUM(Sales) FROM myTable
WHERE [1st_Criteria] = 'XYZ' AND [2nd_Criteria] = 'ABC' AND [3rd_Criteria] = 'KLM'

The query for KPI_2 is:
SELECT SUM(Sales) FROM myTable
WHERE [1st_Criteria] = 'QWE' AND [2nd_Criteria] = 'GHJ' AND [3rd_Criteria] = 'MNO'

(Column names that start with a digit must be bracket-delimited in T-SQL.)

and so on and so forth.

The way I plan to generate the report is through a stored procedure, in which I will read the entire contents of my Criteria Table into a Cursor, and then construct my select statement in dynamic SQL:

DECLARE @KPI_Name varchar(50), @Criteria_1 varchar(50),
        @Criteria_2 varchar(50), @Criteria_3 varchar(50)
DECLARE @SQL nvarchar(500)

DECLARE myCursor CURSOR FOR
    SELECT KPI_Name, Criteria_1, Criteria_2, Criteria_3
    FROM KPItable   -- the criteria table

OPEN myCursor
FETCH NEXT FROM myCursor INTO @KPI_Name, @Criteria_1, @Criteria_2, @Criteria_3
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @SQL = N'INSERT INTO Output_Table (KPI_Value) SELECT SUM(Sales) FROM myTable' +
               N' WHERE [1st_Criteria] = ''' + @Criteria_1 +
               N''' AND [2nd_Criteria] = ''' + @Criteria_2 +
               N''' AND [3rd_Criteria] = ''' + @Criteria_3 + N''''
    EXEC sp_executesql @SQL

    FETCH NEXT FROM myCursor INTO @KPI_Name, @Criteria_1, @Criteria_2, @Criteria_3
END
CLOSE myCursor
DEALLOCATE myCursor

A set-based alternative avoids both the cursor and the dynamic SQL entirely by joining the sales table to the criteria table:

INSERT INTO Output_Table (KPI_Name, KPI_Value)
SELECT k.KPI_Name, SUM(t.Sales)
FROM myTable t
JOIN KPItable k
  ON t.[1st_Criteria] = k.Criteria_1
 AND t.[2nd_Criteria] = k.Criteria_2
 AND t.[3rd_Criteria] = k.Criteria_3
GROUP BY k.KPI_Name

Removing Duplicate Records using SQL Server 2005

Duplicate records can occur in numerous ways, such as loading source files too many times, keying the same data more than once, or just bad database coding. Having a primary key on your table (and you should always have one) can help in the removal of duplicate records, but even with a primary key it is never a fun task to be handed.

I will demonstrate how you can use a common-table expression (CTE) in 2005 to easily remove duplicate entries from a table:

;WITH PendingCalculationCTE (EmployeeId, Start, Finish, [Timestamp], TransactionId, LockFlag, Ranking)
AS
(
    SELECT EmployeeId, Start, Finish, [Timestamp], TransactionId, LockFlag,
           Ranking = DENSE_RANK() OVER(
               PARTITION BY EmployeeId, Start, Finish, [Timestamp], TransactionId, LockFlag
               ORDER BY NEWID())
    FROM PendingCalculation
)
DELETE FROM PendingCalculationCTE WHERE Ranking > 1

 The script above defines my CTE. I am using a windowing function named DENSE_RANK to group the records together based on the EmployeeId, Start, Finish, [Timestamp], TransactionId, LockFlag fields, and assign them a sequential value randomly. This means that if I have two records with the exact same EmployeeId, Start, Finish, [Timestamp], TransactionId, LockFlag values, the first record will be ranked as 1, the second as 2, and so on.
Because a CTE acts as a virtual table, I am able to process data modification statements against it, and the underlying table will be affected. In this case, I am removing any record from the PendingCalculationCTE that is ranked higher than 1. This will remove all of my duplicate records.
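Before running the DELETE, it is worth listing the duplicate groups so you can see what will be removed. A sketch against the same table and columns:

SELECT EmployeeId, Start, Finish, [Timestamp], TransactionId, LockFlag,
       COUNT(*) AS copies
FROM PendingCalculation
GROUP BY EmployeeId, Start, Finish, [Timestamp], TransactionId, LockFlag
HAVING COUNT(*) > 1

Each row returned is one set of duplicated values; after the CTE-based DELETE, this query should return no rows.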

DENSE_RANK – Returns the rank of rows within the partition of a result set, without any gaps in the ranking. The rank of a row is one plus the number of distinct ranks that come before the row in question.

How to Find and Check Number of Connections to a Server

Whenever a client connects to a server over the network, a connection is established and opened on the system. On a busy, heavily loaded server, the number of connections can run into the hundreds if not thousands. Finding and listing the connections on the server by node, client, or IP address is useful for capacity planning and, in many cases, for detecting whether a web server is under a DoS or DDoS (Distributed Denial of Service) attack, in which an IP address opens a large number of connections to the server. To check connection counts on the server, administrators and webmasters can use the netstat command.

Below are some typical examples of netstat command syntax used to check and show the number of connections a server has. Users can also run 'man netstat' for the detailed netstat manual, which documents many options and flags for producing meaningful lists and results.

netstat -na
Display all connections to the server, with addresses and ports shown in numeric form; both established connections and listening sockets are included.

netstat -an | grep :80 | sort

Show only active Internet connections to the server on port 80 and sort the results. Useful in detecting a flood from a single source by revealing many connections coming from one IP.

netstat -n -p | grep SYN_REC | wc -l
Show how many connections in the SYN_RECV state are currently on the server. The number should be fairly low, preferably less than 5. During a DoS attack or mail bombing, the number can climb quite high. However, the value always depends on the system, so a high value may be normal on another server.

netstat -n -p | grep SYN_REC | sort -u
List all the IP addresses involved instead of just a count.

netstat -n -p | grep SYN_REC | awk '{print $5}' | awk -F: '{print $1}'
List the IP addresses of the nodes that are sending connections in the SYN_RECV state.

netstat -ntu | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -n
Count the number of connections each IP address makes to the server.

netstat -anp | grep 'tcp\|udp' | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -n
Count the number of connections each IP address has to the server over TCP or UDP.

netstat -ntu | grep ESTAB | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -nr
Check ESTABLISHED connections only, instead of all connections, and display the connection count for each IP.

netstat -plan | grep :80 | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -nk 1
Show each IP address and its connection count for connections to port 80 on the server. Port 80 is used mainly for HTTP web page requests.

(source: http://www.mydigitallife.info/2007/12/13/how-to-find-and-check-number-of-connections-to-a-server/)