Thursday, 23 August 2012

More Helpful Commands in Linux

A backdrop of stars

  • Difficulty: Easy
  • Application: KStars
You may already have played with KStars, but how about creating a KStars backdrop image that's updated every time you start up?
KStars can be run with the --dump switch, which dumps out an image from your startup settings, but doesn't load the GUI at all. You can create a script to run this and generate a desktop image, which will change every day (or you can just use this method to generate images).
Run KStars like this:
kstars --dump --width 1024 --height 768 --filename ~/kstarsback.png
You can add this to a script in your ~/.kde/Autostart folder to be run at startup. Find the file in Konqueror, drag it to the desktop and select 'Set as wallpaper' to use it as a randomly generated backdrop.
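To have the image regenerated automatically at each login, a minimal sketch of such an Autostart script (the kstars-wallpaper.sh filename is just an example) could look like this:

#!/bin/sh
# ~/.kde/Autostart/kstars-wallpaper.sh (hypothetical name; remember to make it executable)
# Regenerate the star map backdrop on every KDE login.
kstars --dump --width 1024 --height 768 --filename ~/kstarsback.png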

Open an SVG directly

  • Difficulty: Easy
  • Application: Inkscape
You can run Inkscape from a shell and immediately edit a graphic directly from a URL. Just type:
inkscape http://www.somehost.com/graphic.svg
Remember to save it as something else though!


How to cancel right click context menu in jQuery

Hi there guys! While I was working on the next tutorial I ran into a problem: how can I block / cancel the context menu when I right-click on the page?
There are a lot of examples and scripts in plain JavaScript, but once again jQuery makes it easy! Here you have the code:
$(document).ready(function(){
    $(document).bind("contextmenu",function(e){
        return false;
    });
});
This code works in: Firefox, Internet Explorer 6 & 7, Opera, Safari & Chrome.
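If you are on jQuery 1.7 or later, where .on() supersedes .bind(), the equivalent code (a minimal sketch with the same behaviour) is:

$(document).ready(function(){
    $(document).on("contextmenu", function(e){
        return false;
    });
});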

Wednesday, 22 August 2012

Old tricks for new browsers – a talk at jQuery UK 2012

jQuery’s revolutionary new way of looking at web design was based on two main things: accessing the document via CSS selectors rather than the unwieldy DOM methods, and chaining of JavaScript commands. jQuery then continued to make event handling and Ajax interactions easier and implemented the Easing equations to allow for slick and beautiful animations.
However, this simplicity came at a price: developers seem to forget a few very simple techniques that allow you to write terse and simple-to-understand JavaScript that doesn’t rely on jQuery. Amongst others, the most powerful ones are event delegation and assigning classes to parent elements while leaving the main work to CSS.

Event delegation

Event delegation means that instead of applying an event handler to each of the child elements in an element, you assign one handler to the parent element and let the browser do the rest for you. Events bubble up the DOM of a document: they fire on the element you interacted with and then on each of its parent elements. That way, all you have to do is compare against the target of the event to get the one you want to access. Say you have a to-do list in your document. All the HTML you need is:
<ul id="todo">
  <li>Go round Mum's</li>
  <li>Get Liz back</li>
  <li>Sort life out!</li>
</ul>
In order to add event handlers to these list items, in jQuery beginners are tempted to do a $('#todo li').click(function(ev){...}); or – even worse – add a class to each list item and then access these. If you use event delegation all you need in JavaScript is:
document.querySelector('#todo').addEventListener('click',
  function(ev) {
    var t = ev.target;
    if (t.tagName === 'LI') {
      alert(t + t.innerHTML);
      ev.preventDefault();
    }
}, false);
Newer browsers have the querySelector and querySelectorAll methods, which give you access to DOM elements via CSS selectors – something we learned from jQuery. We use this here to access the to-do list. Then we apply an event listener for click to the list.
We read out which element has been clicked with ev.target and compare its tagName to LI (this property is always uppercase). This means we will never execute the rest of the code when the user for example clicks on the list itself. We call preventDefault() to tell the browser not to do anything – we now take over.


Top 7 Tricks to Rank in Google Places

If you’ve worked with Google in the past you’ll know just how hard it is to rank for Google Places for many searches. It’s also apparent how important Google Places have become – they often take up the top parts of the page for a search term. So even if you rank first for a certain keyword, you could see three to six Google Places ranking above you. The question is, what tricks can you use to secure your spot?

1. Set Up

First of all you have to set up your business page, which can be done by completing your profile on Google Places. All you need is a Google Account, which is free and easy to set up. Then it's on to the more interesting stuff.

2. Citations

If you don’t know what a citation is (you should if you’re reading this), it is basically a mention of your business on another site. The closer it comes to an exact match of your Google Places listing, the better. So if you have your link, description, contact information or anything else out there, make sure it matches your Google Places page. This helps your organic rankings as well – it serves a dual purpose.

3. Get Listed

This is important to get the best citations. They can come from pages such as Yahoo, Superpages, Insiderpages, Angie’s List and many more, but these are the ones that are regarded as giving you the most juice. This will allow you to give Google all the relevant information about your business. You will want to make sure that your details are comprehensive because filling out as many fields as possible can really help with your rankings for all different kinds of search phrases.


5 Ways on How to be on top of Google Search

If you’re not following these 5 important ways to rank well on Google Search Results, you’re missing out in a big way.

Are your pages not showing up in Google’s top search results? Here are some possible reasons why your site is not performing well on search engines, especially Google.

1. Your articles are mostly outdated
Most articles written and published online have a short lifespan; some of them last only a day or two. If you are new to marketing your website and want to reach the top of Google's rankings, these kinds of articles are not a good place to start. Try to think of a topic in your niche with a long-term lifespan – something people will search for as a solution, not just as passing information. Stay away from news articles, because news articles have a very short lifespan.

2. Your website doesn’t speak the Search Engine’s language
Major search engines like Google can’t read JavaScript and Flash objects. So if you use these to build your website, Google just can’t get at it, your content looks irrelevant, and you’ll find it hard to reach Google’s top search results. Yet lots of website developers insist on using JavaScript and Flash for their websites. These things can be done in a search-engine-friendly way using HTML 5.0, but that requires good, or even great, web development skills.


Tuesday, 21 August 2012

Top Secret: 4 SEO Tricks Google Doesn’t Want You to Know

We all know that SEO matters. What we don’t know, however, is what Google is hiding. Continue reading to learn four secrets that search engines are desperate to keep under wraps.
Good morning, online business owner. Your mission, should you choose to accept it, is to conquer search engines, namely Google, to make your site rank higher than your competition.
Here’s your top secret dossier for this not-so mission impossible:

Secret #1: Meta tags and descriptions are Google garbage

Back in the old days (circa 2000), meta tags, particularly meta keywords and meta descriptions, were draped in gold. These pieces of HTML told search engines exactly what a site was about, which led to high rankings for some pretty crummy sites.
As search algorithms became more complex, the relevance of meta tags quickly faded. In fact, Google now completely ignores your meta description and doesn’t care about your meta keywords, either.
Thus, don’t spend too much time worrying about your meta tags; Google doesn’t.
Caveat: Although your title tag isn’t technically a meta tag, it’s still important to Google. And while Google doesn’t use your meta descriptions for ranking, they’re still helpful to your readers.

Secret #2: Links help, but powerful links are crucial

It’s no secret that link building is an important part of any SEO strategy – but what Google isn’t telling you is this: the real trick is getting powerful links.
For example, if you work to get 100 links to your site from a bunch of no-name bloggers, you’ve definitely done yourself an SEO favor. But if you secure just one link from a powerful site like The Huffington Post, you’ve got one heck of a boost.
In other words, it’s all about quality, not quantity.
Thus, continue to work to gather as many quality links as you can instead of fighting for every link possible. (Trust me, Google likes to watch you struggle here.)


Look what Stella brought to CentOS 6.3, a Desktop OS based on CentOS

There is a new Linux distribution released almost every week, sometimes even every day. The latest is one called Stella, and the first version is Stella 6.3. Stella is a desktop-focused remix of CentOS, and Stella 6.3 is based on CentOS 6.3.
If you are familiar with CentOS, you know that out of the box, it is not really designed as a desktop distribution. Stella changes all that, as it is primarily aimed at desktop users, while retaining the core enterprise features and capabilities of CentOS.
And you can see that just by looking at the package manager. The package categories tell you that everything you can find in CentOS is also available in Stella, plus desktop applications that you will not find in any default installation of CentOS. For example, one application listed in the screenshot below is ROSA Media Player (ROMP), the default media player in ROSA Desktop, a distribution based on Mandriva Linux.
Stella Package Manager
Because it is loaded with desktop applications and media codecs not available in CentOS, you can play most audio and video file formats out of the box. Here it shows a favorite online video playing in Firefox.
Stella Video Player
The next few screen shots show what the desktop looks like and some of the applications accessible from the menu. This one shows installed Internet applications.
Stella Internet Apps
Installed Office applications.
Stella Office Apps
Installed multimedia applications.
Stella Multimedia Apps
Updates manager.
Stella App Updates
The system is not without error, though.
Stella Repo Error
Administrative tools in the Preferences menu.
Stella Preferences
System-wide management applications in the Administrative menu.


Some Useful Commands in Linux

Access your programs remotely

  • Difficulty: Easy
  • Application: X
If you would like to lie in bed with your Linux laptop and access your applications from your Windows machine, you can do this with SSH. You first need to enable the following setting in /etc/ssh/sshd_config:
X11Forwarding yes 
We can now run The GIMP on 192.168.0.2 with:
ssh -X 192.168.0.2 gimp
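After editing sshd_config, remember to restart the SSH daemon on the remote machine so the change takes effect. On Debian-style systems of this era that is something like:

/etc/init.d/ssh restart
# On Red Hat-style systems the init script is usually called sshd instead.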
 
 

Monday, 20 August 2012

Grabbing a screenshot without X


  • Difficulty: Easy
  • Application: Shell
There are plenty of screen-capture tools, but a lot of them are based on X. This leads to a problem when running an X application would interfere with the application you wanted to grab - perhaps a game or even a Linux installer. If you use the venerable ImageMagick import command though, you can grab from an X session via the console. Simply go to a virtual terminal (Ctrl+Alt+F1 for example) and enter the following:
chvt 7; sleep 2; import -display :0.0 -window root sshot1.png; chvt 1;
The chvt command changes the virtual terminal, and the sleep command gives it a while to redraw the screen. The import command then captures the whole display and saves it to a file before the final chvt command sticks you back in the virtual terminal again. Make sure you type the whole command on one line.
This can even work on Linux installers, many of which leave a console running in the background - just load up a floppy/CD with import and the few libraries it requires for a first-rate run-anywhere screen grabber.

Uptime on your hands


  • Difficulty: Expert
  • Application: Perl
In computing, wasted resources are resources that could be better spent helping you. Why not run a process that updates the titlebar of your terminal with the current load average in real-time, regardless of what else you're running?
Save this as a script called tl, and save it to your ~/bin directory:
#!/usr/bin/perl -w
use strict;
$|++;

my $host=`/bin/hostname`;
chomp $host;

while(1) {
    open(LOAD,"/proc/loadavg") || die "Couldn't open /proc/loadavg: $!\n";
    my @load=split(/ /,<LOAD>);
    close(LOAD);

    # Begin the xterm title-setting escape sequence (ESC ] 0 ;).
    # Without this prefix the text would just be printed into the terminal.
    print "\033]0;";
    print "$host: $load[0] $load[1] $load[2] at ", scalar(localtime);
    # BEL (\007) terminates the sequence and commits the new title.
    print "\007";

    sleep 2;
}
When you'd like to have your titlebar replaced with the name, load average and current time of the machine you're logged into, just run tl &. It will happily go on running in the background, even if you're running an interactive program like Vim.

Favour str_replace() over ereg_replace() and preg_replace()


Str Replace
Speed tests show that str_replace() is 61% faster.
In terms of efficiency, str_replace() is much more efficient than regular expressions at replacing strings. In fact, according to Making the Web, str_replace() is 61% more efficient than regular expressions like ereg_replace() and preg_replace().
If you do need regular expressions, though, prefer preg_replace() over the older ereg_replace() – the PCRE functions are considerably faster.

Shortcut the else in PHP

Shortcut the else
It should be noted that tips 3 and 4 both might make the code slightly less readable. The emphasis for these tips is on speed and performance. If you’d rather not sacrifice readability, then you might want to skip them.
Anything that can be done to make the code simpler and smaller is usually a good practice. One such tip is to take the middleman out of else statements, so to speak. Christian Montoya has an excellent example of conserving characters with shorter else statements.
Usual else statement:
if( $condition )
{
    $x = 5;
}
else
{
    $x = 10;
}
If the $x is going to be 10 by default, just start with 10. No need to bother typing the else at all.
$x = 10;
if( $condition )
{
    $x = 5;
}
While it may not seem like a huge difference in the space saved in the code, if there are a lot of else statements in your programming, it will definitely add up.
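If readability allows, PHP's ternary operator shortens the same default-then-override pattern even further (a stylistic alternative, not part of the original tip):

$x = ( $condition ) ? 5 : 10;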

Know the Difference Between Comparison Operators


Equality Operators
PHP’s list of comparison operators.
Comparison operators are a huge part of PHP, and some programmers may not be as well versed in their differences as they ought to be. In fact, an article at I/O reader states that many PHP developers can’t tell the differences right away between comparison operators. Tsk tsk.
These are extremely useful, and most PHPers can’t tell the difference between == and ===. Essentially, == looks for equality, and by that PHP will generally try to coerce data into similar formats, e.g. 1 == '1' (true), whereas === looks for identity: 1 === '1' (false). The usefulness of these operators should be immediately recognized for common functions such as strpos(). Since zero in PHP is analogous to FALSE, without this operator there would be no way to tell from the result of strpos() whether something is at the beginning of a string or whether strpos() failed to find anything. Obviously this has many applications elsewhere where returning zero is not equivalent to FALSE.
Just to be clear, == looks for equality, and === looks for identity. You can see a list of the comparison operators on the PHP.net website.
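A minimal sketch of the strpos() pitfall just described (the strings are made-up examples):

$pos = strpos( 'linux tips', 'linux' );  // 0 – a match at the very start

if ( $pos == false )  { }   // WRONG: 0 == false, so a hit at position 0 looks like a miss
if ( $pos === false ) { }   // RIGHT: true only when nothing was found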

Faster hard drives


  • Difficulty: Expert
  • Application: hdparm
You may know that the hdparm tool can be used to speed test your disk and change a few settings. It can also be used to optimise drive performance, and turn on some features that may not be enabled by default. Before we start though, be warned that changing drive options can cause data corruption, so back up all your important data first. Testing speed is done with:
hdparm -Tt /dev/hda
You'll see something like:
/dev/hda:
Timing buffer-cache reads:   128 MB in  1.64 seconds =78.05 MB/sec
Timing buffered disk reads:  64 MB in 18.56 seconds = 3.45MB/sec
Now we can try speeding it up. To find out which options your drive is currently set to use, just pass hdparm the device name:
hdparm /dev/hda
 /dev/hda:
 multcount    =  16 (on)
 I/O support  =  0 (default 16-bit)
 unmaskirq    =  0 (off)
 using_dma    =  0 (off)
 keepsettings =  0 (off)
 readonly     =  0 (off)
 readahead    =  8 (on)
 geometry     = 40395/16/63, sectors = 40718160, start = 0
This is a fairly default setting. Most distros will opt for safe options that will work with most hardware. To get more speed, you may want to enable dma mode, and certainly adjust I/O support. Most modern computers support mode 3, which is a 32-bit transfer mode that can nearly double throughput. You might want to try
hdparm -c3 -d1 /dev/hda
Then rerun the speed check to see the difference. Check out the modes your hardware will support, and the hdparm man pages for how to set them.

Unclog open ports


  • Difficulty: Intermediate
  • Application: netstat
Generating a list of network ports that are in the Listen state on a Linux server is simple with netstat:
root@catlin:~# netstat -lnp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name 
tcp 0 0 0.0.0.0:5280 0.0.0.0:* LISTEN 698/perl 
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 217/httpd 
tcp 0 0 10.42.3.2:53 0.0.0.0:* LISTEN 220/named 
tcp 0 0 10.42.4.6:53 0.0.0.0:* LISTEN 220/named 
tcp 0 0 127.0.0.1:53 0.0.0.0:* LISTEN 220/named 
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 200/sshd 
udp 0 0 0.0.0.0:32768 0.0.0.0:* 220/named 
udp 0 0 10.42.3.2:53 0.0.0.0:* 220/named 
udp 0 0 10.42.4.6:53 0.0.0.0:* 220/named 
udp 0 0 127.0.0.1:53 0.0.0.0:* 220/named 
udp 0 0 0.0.0.0:67 0.0.0.0:* 222/dhcpd 
raw 0 0 0.0.0.0:1 0.0.0.0:* 7 222/dhcpd
That shows you that PID 698 is a Perl process that is bound to port 5280. If you're not root, the system won't disclose which programs are running on which ports.

Wireless speed management


  • Difficulty: Intermediate
  • Application: iwconfig
The speed at which a piece of radio transmission/receiver equipment can communicate with another depends on how much signal is available. In order to maintain communications as the available signal fades, the radios need to transmit data at a slower rate. Normally, the radios attempt to work out the available signal on their own and automatically select the fastest possible speed.
In fringe areas with a barely adequate signal, packets may be needlessly lost while the radios continually renegotiate the link speed. If you can't add more antenna gain, or reposition your equipment to achieve a better signal, consider forcing your card to sync at a lower rate. This will mean fewer retries, and can be substantially faster than using a continually flip-flopping link. Each driver has its own method for setting the link speed. In Linux, set the link speed with iwconfig:
iwconfig eth0 rate 2M
This forces the radio to always sync at 2Mbps, even if other speeds are available. You can also set a particular speed as a ceiling, and allow the card to automatically scale to any slower speed, but go no faster. For example, you might use this on the example link above:
iwconfig eth0 rate 5.5M auto
Using the auto directive this way tells the driver to allow speeds up to 5.5Mbps, and to run slower if necessary, but will never try to sync at anything faster. To restore the card to full auto scaling, just specify auto by itself:
iwconfig eth0 rate auto
Cards can generally reach much further at 1Mbps than they can at 11Mbps. There is a difference of 12dB between the 1Mbps and 11Mbps ratings of the Orinoco card - that's four times the potential distance just by dropping the data rate!

Save battery power


  • Difficulty: Intermediate
  • Application: hdparm
You are probably familiar with using hdparm for tuning a hard drive, but it can also save battery life on your laptop, or make life quieter for you by spinning down drives.

hdparm -y /dev/hdb
hdparm -Y /dev/hdb
hdparm -S 36 /dev/hdb 
 
In order, these commands will: cause the drive to switch to Standby mode, switch to Sleep mode, and finally set the Automatic spindown timeout. This last includes a numeric variable, whose units are blocks of 5 seconds (for example, a value of 12 would equal one minute).
Incidentally, this habit of specifying spindown time in blocks of 5 seconds should really be a contender for a special user-friendliness award - there's probably some historical reason for it, but we're stumped. Write in and tell us if you happen to know where it came from!

Parallelise your build


  • Difficulty: Easy
  • Application: GCC
If you're running a multiprocessor system (SMP) with a moderate amount of RAM, you can usually see significant benefits by performing a parallel make when building code. Compared to doing serial builds when running make (as is the default), a parallel build is a vast improvement. To tell make to allow more than one child at a time while building, use the -j switch:

make -j4; make -j4 modules
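A common rule of thumb, an addition to the original tip, is to match -j to the number of CPUs; with GNU coreutils installed, nproc can supply that number automatically:

make -j$(nproc)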

Quicker emails


  • Difficulty: Easy
  • Application: KMail
Can't afford to waste three seconds locating your email client? Can't be bothered finding the mouse under all those gently rotting mountains of clutter on your desk? Whatever you are doing in KDE, you are only a few keypresses away from sending a mail. Press Alt+F2 to bring up the 'Run command' dialog. Type:

mailto:plop@ploppypants.com

Press return and KMail will automatically fire up, ready for your words of wisdom. You don't even need to fill in the entire email address. This also works for Internet addresses: try typing www.slashdot.org to launch Konqueror.

Defrag your databases


  • Difficulty: Easy
  • Application: MySQL
Whenever you change the structure of a MySQL database, or remove a lot of data from it, the files can become fragmented, resulting in a loss of performance, particularly when running queries. Just remember to run the optimiser any time you change the database:

mysqlcheck -o <databasename>

You may also find it worth your while to defragment your database tables regularly if you are using VARCHAR fields: these variable-length columns are particularly prone to fragmentation.
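To optimise every database in one pass (assuming a MySQL version that has the --all-databases switch and an account that can reach them all), something like this should work:

mysqlcheck -o --all-databases -u root -p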

Nautilus shortcuts


  • Difficulty: Easy
  • Application: Nautilus
Although most file managers these days are designed to be used with the mouse, it's also useful to be able to use the keyboard sometimes. Nautilus has a few keyboard shortcuts that can have you flying through files:
  • Open a location - Ctrl+L
  • Open Parent folder - Ctrl+Up
  • Arrow keys navigate around current folder.
You can also customise the file icons with 'emblems'. These are little graphical overlays that can be applied to individual files or groups. Open the Edit > Backgrounds and Emblems menu item, and drag-and-drop the images you want.

Finding the biggest files


  • Difficulty: Easy
  • Application: Shell
A common problem with computers is when you have a number of large files (such as audio/video clips) that you may want to get rid of. You can find the biggest files in the current directory with:

ls -lSrh

The "r" causes the large files to be listed at the end and the "h" gives human readable output (MB and such). You could also search for the biggest MP3/MPEGs:

ls -lSrh *.mp*

You can also look for the largest directories with:

du -kx | egrep -v "\./.+/" | sort -n

Keeping your clock in time in Linux


  • Difficulty: Easy
  • Application: NTP
If you find that the clock on your computer seems to wander off the time, you can make use of a special NTP tool to ensure that you are always synchronised with the kind of accuracy that only people that wear white coats get excited about. You will need to install the ntpdate tool that is often included in the NTP package, and then you can synchronise with an NTP server:

ntpdate ntp.blueyonder.co.uk

A list of suitable NTP servers is available at www.eecis.udel.edu/~mills/ntp/clock1b.html. If you modify your boot process and scripts to include this command you can ensure that you are perfectly in time whenever you boot your computer. You could also run a cron job to update the time.
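For the cron approach, a minimal sketch of a crontab entry (reusing the example server above; the path to ntpdate varies by distribution):

0 * * * * /usr/sbin/ntpdate ntp.blueyonder.co.uk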

Backup your website easily


  • Difficulty: Easy
  • Application: Backups
If you want to back up a directory on a computer and only copy changed files to the backup computer instead of everything with each backup, you can use the rsync tool to do this. You will need an account on the remote computer that you are backing up from. Here is the command:

rsync -vare ssh jono@192.168.0.2:/home/jono/importantfiles/* /home/jono/backup/

Here we are backing up all of the files in /home/jono/importantfiles/ on 192.168.0.2 to /home/jono/backup on the current machine.
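Because a mistyped rsync can copy a lot of the wrong thing, it can be worth previewing the transfer first with the -n (--dry-run) switch, which lists what would be copied without touching anything:

rsync -n -vare ssh jono@192.168.0.2:/home/jono/importantfiles/* /home/jono/backup/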

Running multiple X sessions


  • Difficulty: Easy
  • Application: X
If you share your Linux box with someone and you are sick of continually logging in and out, you may be relieved to know that this is not really needed. Assuming that your computer starts in graphical mode (runlevel 5), simultaneously press Ctrl+Alt+F1 and you will get a login prompt. Enter your login and password, then execute:
startx -- :1
to get into your graphical environment. To go back to the previous user session, press Ctrl+Alt+F7, while to get yours back press Ctrl+Alt+F8.
You can repeat this trick: the keys F1 to F6 identify six console sessions, while F7 to F12 identify six X sessions. Caveat: although this is true in most cases, different distributions can implement this feature in a different way.

Creating Mozilla keywords


  • Difficulty: Easy
  • Application: Firefox/Mozilla
A useful feature in Konqueror is the ability to type gg onion to do a Google search based on the word onion. The same kind of functionality can be achieved in Mozilla by first clicking on Bookmarks>Manage Bookmarks and then Add a New Bookmark. Add the URL as:
http://www.google.com/search?q=%s
Now select the entry in the bookmark editor and click the Properties button. Now enter the keyword as gg (or this can be anything you choose) and the process is complete. The %s in the URL will be replaced with the text after the keyword. You can apply this hack to other kinds of sites that rely on you passing information on the URL.
Alternatively, right-click on a search field and select the menu option "Add a Keyword for this Search...". The subsequent dialog will allow you to specify the keyword to use.

Fix a wonky terminal


  • Difficulty: Easy
  • Application: bash
We've all done it - accidentally used less or cat to list a file, and ended up viewing binary instead. This usually involves all sorts of control codes that can easily screw up your terminal display. There will be beeping. There will be funny characters. There will be odd colour combinations. At the end of it, your font will be replaced with hieroglyphics and you don't know what to do. Well, bash is obviously still working, but you just can't read what's actually going on! Send the terminal an initialisation command:

reset

and all will be well again.

Replacing same text in multiple files


  • Difficulty: Intermediate
  • Application: find/Perl
If you have text you want to replace in multiple locations, there are several ways to do this. To replace the text Windows with Linux in all files in current directory called test[something] you can run this:
perl -i -pe 's/Windows/Linux/;' test*
To replace the text Windows with Linux in all text files in current directory and down you can run this:
find . -name '*.txt' -print | xargs perl -pi -e 's/Windows/Linux/ig'
Or if you prefer this will also work, but only on regular files:
find -type f -name '*.txt' -print0 | xargs --null perl -pi -e 's/Windows/Linux/'
Saves a lot of time and has a high guru rating!

Check processes not run by you in Linux


  • Difficulty: Expert
  • Application: bash
Imagine the scene - you get yourself ready for a quick round of Crack Attack against a colleague at the office, only to find the game drags to a halt just as you're about to beat your uppity subordinate - what could be happening to make your machine so slow? It must be some of those other users, stealing your precious CPU time with their scientific experiments, webservers or other weird, geeky things!
OK, let's list all the processes on the box not being run by you!
ps aux | grep -v `whoami`
Or, to be a little more clever, why not just list the top ten time-wasters:
ps aux  --sort=-%cpu | grep -m 11 -v `whoami` 
It is probably best to run this as root, as this will filter out most of the vital background processes. Now that you have the information, you could just kill their processes, but much more dastardly is to run xeyes on their desktop. Repeatedly!

Mastering the find Command in Linux


This command is an extremely handy tool for programmers in shell scripting and various other system administrative tasks. In fact, you will save a lot of time using the find command that would otherwise be wasted hunting for files by hand. With the various criteria you can impose on the search, find is the ideal command for locating files.
There are several versions of find e.g. POSIX find, AIX find, GNU find etc. Since we are concerned with Linux, this post will be based on GNU find.

1. Using find command
$ find myFile.txt
Search for myFile.txt in the current directory.
$ find . -name myFile.txt
Search for myFile.txt in the current directory and its sub-directories.
Here, ‘.’ represents the current directory.
You can specify many places to search, for e.g.
$ find /home /usr . -name "*.txt"
Search all files with .txt extension in /home, current directory and /usr.
To search files without case sensitivity, use
$find . -iname myFile.txt

2. With Wildcards
$ find /home -type f -name "myFile*"
Search for all the files whose filename starts with myFile in the /home directory and its sub-directories. (The quotes stop the shell from expanding the wildcard before find sees it.)
-type f : to search for files only
$ find /home -type d -name "*john"
Similarly, it will search for all the directories with a directory name ending in john.
$find /home -type f -name "[ldt]uck"
It will search for files with the filename luck, duck or tuck in the /home directory and its sub-directories.
$find . -type f -name "?uck"
By introducing '?' in the above example, the find command searches for a four-character filename whose initial letter can be any character.
Check out:
$find . -type f -executable -name "f*ball"
$find . -type f -name "f*b*"
$find . -type f -writable -name "*woo*"
$find . -type l -readable -name "*.jpg"   ( -type l : for symbolic links )
3. With Date / Time
Find command permits you to search for files based on
(*) last data modification time ( mtime or mmin )
(*) last access time ( atime or amin )
(*) last status changed time ( ctime or cmin)
a. mmin / mtime
$ find /home -mmin -10
Search for all the files whose data was modified less than 10 minutes ago.
$ find /home -mmin 10
Search for files whose data was modified exactly 10 minutes ago
$ find /home -mmin +10
Search for files whose data was modified more than 10 minutes ago
Similarly, if you use mtime instead of mmin, the find command will count in 24-hour periods instead of minutes.
$ find /home -mtime -1
Search for files modified in the last 24 hrs.
b. amin / atime
$find /home -amin -10
Similarly, it will search for files that were accessed within the last 10 minutes.
$find /home -atime +10
Search for files accessed more than 10 days ago.
c. cmin / ctime
$find /home -cmin 10
It will list the files whose status (i.e. change in the ownership or access permissions) was changed exactly 10 minutes ago.
$find /home -ctime -10
Search for files whose status was changed less than 10 days ago.
Remember: You always have the option of combining these options.
$find /home -amin +2 -amin -10
( search for files that were last accessed between 2 and 10 minutes ago )
4. The -exec parameter
The -exec parameter defines what to do with the file. This is indeed a handy and an important option to learn.
$find /home -empty -exec rm {} \;
( search for empty files in the /home directory and its sub-directories and remove them using the rm command )
$find /home -name "*.doc" -exec ls {} \;
( search for document files and list them )
$find /home -name "*.doc" -ok rm {} \;
( search for document files and remove them, but prompt for confirmation before each file )
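GNU find can also batch many matches into a single command invocation by terminating -exec with + instead of \; – typically much faster when the command accepts several filenames at once:

$find /home -name "*.doc" -exec ls {} +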
5. With Permissions
$find . -perm 644
( search for files with permissions 644, i.e. read and write permission for the owner, read permission for the group and other users )
$find . -perm -644
( same as above, but without regard to the presence of any extra permission bits, e.g. the executable bit )
$find . -perm /444
( search for files which are readable by somebody, i.e. the owner, the group or anybody else )
Remember:
$find . -perm -440
$find . -perm -u+r,g+r
$find . -perm -u=r,g=r
All three commands search for files which are readable by both their owner and their group.
Similarly:
$find . -perm /440
$find . -perm /u+r,g+r
$find . -perm /u=r,g=r
All three commands search for files which are readable by either their owner or their group.
6. Operators
Listed in order of decreasing precedence:
a. ( expr )
$ find /home \( -size +200c \)
Search for files with size greater than 200 bytes (c is for bytes).
b. ! expr  :  True if expr is false
$ find /home \! -perm 644
It will not list the files or directories with permission 644.
c. expr1 expr2  :  expr2 is not evaluated if expr1 is false
$ find /home -perm 644 -size -2k
Search for files with permission 644 and size less than 2 kilobytes.
( expr1 -a expr2 is the same as expr1 expr2,
i.e. $find /home -perm 644 -a -size 2k is the same as
$find /home -perm 644 -size 2k )
d.  expr1 -o expr2 : expr2 is not evaluated if expr1 is true.
$find /home -size 6M -o -size 1G
Search for files that are exactly 6 megabytes or exactly 1 gigabyte in size.
e.  expr1 , expr2  : both expr1 and expr2 are always evaluated.
$find /home -name "*.doc" -print , -name "*.pdf" -print

7. With I/O Redirection and Pipes
$find / -size +4M > list_of_files.txt
Search for files greater than 4 Megabytes and redirect the standard output to the file list_of_files.txt.
$ find /home -size +4M | wc -l
Search for files greater than 4 Megabytes and count number of files (by counting number of lines).

8. With Users and Groups
Search for files belonging to a user
$find / -user rabi -name "*.doc"
Search for files belonging to a group
$find / -group fortystones -name "*.doc"
Search for files that do not belong to any user
$find / -nouser -name "*.pdf"
Search for files that do not belong to any group
$find / -nogroup -name "*.pdf"

9. With Timestamp Comparisons
$find / -newer mydoc.doc
Search for files that were modified more recently than mydoc.doc.
$find / -anewer mydoc.doc
Search for files that were last accessed more recently than file mydoc.doc was modified.
$find / -cnewer mydoc.doc
Search for files whose status was last changed more recently than the file mydoc.doc was modified.
$find / -used 2
Search for files that were last accessed 2 days after their status was last changed.
10. With -print, -print0, -printf
$find / -name "*.pdf" -print
( print to the standard output, followed by a newline )
$find / -name "*.pdf" -print0
( print to the standard output, followed by a null character instead of a newline )
$find / -name "*.pdf" -printf "%g %s %p\n"
Search for pdf files and print each one's group name, size in bytes and filename to the standard output. Here \n gives a new line.
For more options regarding printf, refer manual page of find.
e.g. %a : file’s last access time
%c  : file’s last status change time
%d  : file’s depth in the directory tree
%m : file’s permission bits
%t   : file’s last modification time
%u  : file’s user name etc.
Until you play with the find command, going through its manual page, experimenting with its options and mixing them together, you will not be able to use this wonderful command confidently.

Basic Linux Commands

1. BASH ( Bourne-Again SHell ) is the most common shell in Linux. It is a free-software shell.
cat /etc/shells -> To list the shells available on your system
/bin/shell-name -> To change to another shell temporarily
/bin/sh : To change to the sh shell.
To return to the bash shell : /bin/bash
2. To print your home directory location:
echo $HOME
3. To print colourful text
echo -e "\033[31m Hello World"
Output :
Hello World ( in red )
Set the foreground colour:
echo -e "\033[3Xm" ( where X = 0 - 7 )
Set the background colour:
echo -e "\033[4Ym" ( where Y = 0 - 7 )
4. The sort command can be used to sort lines of text files. It can be used for sorting of both numbers and characters or words.
For example if your file myfile.txt contains the following data :
Raju
Khanal
aayush
If you want to sort it the following command gives you the sorted list
sort myfile.txt
Output :
aayush
Khanal
Raju
5. The cut command is used for removing sections from each line of files. For instance consider a file myfile.txt with the following data
IIT2009009:Raju:Khanal
IIT2009010:Pankaj:Bhansali
cut -c1-10 myfile.txt
Output :
IIT2009009
IIT2009010
cut -c4,8 myfile.txt
Output:
20
20
cut -d: -f2 myfile.txt
Output:
Raju
Pankaj
6. The paste command is used for merging the lines of files.
paste file1 file2
The output shows the two files combined line by line, with tabs separating the columns.
7. The join Command is used for joining lines of two files on a common field.
File1 data
IIT2009009 30
IIT2009010 40
File2 data :
IIT2009009 40
IIT2009010 50
join File1 File2
Output :
IIT2009009 30 40
IIT2009010 40 50
Note : Make sure the files are in sorted order.
8. uniq command is used to omit the repeated lines.
myfile.txt data
Raju Khanal
Raju Khanal
Aayush Khanal
uniq -u myfile.txt -> For non-duplicated lines use the -u flag.
Output :
Aayush Khanal
uniq -c myfile.txt -> For counting the number of times each one appears -c flag is used.
Output :
2 Raju Khanal
1 Aayush Khanal
9. spell : It is a spell-checking program which prints each misspelled word on a line of its own. When used with the -n option it gives the line numbers before the lines.
spell -n myfile.txt gives the line number and the misspelled word in the myfile.txt file.
10. finger command : The finger command displays information about the system users.
11. Say you want to add a user but don’t know the command to do it. Don’t worry with the power of the Linux command line you don’t have to mug up the commands. You can use the man command with the option -k to search by keyword user for relevant commands.
Note : The man -k is equivalent to apropos command which is used to search the short manual page descriptions for keywords and display any matches.
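e.g. man -k user should turn up commands such as useradd and userdel, whose one-line descriptions mention the keyword.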
12. The general forms of the sed command are as follows:
Substitution: sed 's/regexp/replacement/g' < file >
Deletion: sed '< start >,< end >d' < file >
sed 's/raju/aayush/g' myfile.txt
=> To change all occurrences of the word 'raju' to 'aayush' in the myfile.txt file, where "s" means substitute and "g" means make a global change. If you leave off the "g", only the first occurrence on each line is changed.
sed '2,3d' myfile.txt
=> To remove lines starting from line number 2 and ending at line number 3, including the third line.
13. With the ‘awk’ command, you can substitute words from an input file’s lines for words in a template or perform calculations on numbers within a file.
Ex : Consider a file with the following data
Raju is 57 Kg and Aayush is 60 Kg.
Pankaj is 60 Kg and Pramod is 55 Kg.
awk '{ print $1 " and " $6 }' < filename >
Output :
Raju and Aayush
Pankaj and Pramod

awk '{ print "Average of both are " ( $3 + $8 ) / 2 }' < filename >
Output :
Average of both are 58.5
Average of both are 57.5

14. expr is used for evaluating expressions.
Ex: expr 10 + 3 gives 13.
Note: For multiplication it is \* and not *.
15. bc, the Linux calculator program, can be used to perform arithmetic calculations.
For instance, if you want to multiply 10 and 3, just run the bc command and then give it the expression 10 * 3 to get the desired output.
Note : In this case the multiplication symbol is the traditional "*" instead of the "\*" we used with expr.
16. whatis is a command that displays the manual page descriptions. If you explore your Linux system, you will find a lot of programs and you may not know what they do.
Ex : whatis cat
Output :
cat (1) – concatenate files and print on the standard output.
Note : man -f cat gives the same output.
17. The tee command reads from standard input and writes to standard output and files.
ls -l | tee dir_list => For directing the output of ls -l to dir_list file.
When used with the -a option it appends to the files instead ( remember the >> operator ).
18. zcat will read gzipped files without needing to uncompress them first.
zcat myfile.txt.gz => To read the gzipped file myfile.txt.gz
19. The file command is used to determine the file type.
A directory named abc that cannot be identified from its colour can be identified with the file command.

file abc
Output :
abc: directory

20. The diff command is used to compare files line by line. When used with the -y option ( side by side ) the output is obtained in two columns differentiating the two files increasing readability.
Note : The sign “|” in the output indicates that there is a difference whereas the sign “>” and “<” indicate what is left out or added.
21. The cat command provides three useful options: -v for displaying non-printing characters, -t for printing "^I" for each Tab in the file, and -e for printing a "$" at the end of each line.

cat -vet < filename >
22. mkdir used with the -p option creates nested directories.
23. The tac command concatenates and prints files in reverse.
24. The dict command can be used to find out the meaning of any word.
dict encumbered
gives you the meaning, thesaurus entries and sample sentences for the word encumbered.
Note: The dict package is necessary for the usage of the dict command.
25. The style command analyses the surface characteristics of a document giving the sentence info, word usage, sentence beginnings and so on.
26. The locate command can be used to find files by name. It is an alternative to the find command and can be really useful.
locate < filename > gives the path of that filename.

40 Basic Linux Commands

1.  Everything in Linux is a file including the hardware and even the directories.
2. # : Denotes the super(root) user
3.  $ : Denotes the normal user
4.  /root: Denotes the super user’s directory
/home: Denotes the normal user’s directory.
5.  Switching between Terminals
§  Ctrl + Alt + F1-F6: Console login
§  Ctrl + Alt + F7: GUI login
6.  The Magic Tab: Instead of typing the whole filename if the unique pattern for a particular file is given then the remaining characters need not be typed and can be obtained automatically using the Tab button.
7.   ~(Tilde): Denotes the current user’s home directory
8.   Ctrl + Z: To stop a command that is working interactively without terminating it.
9.  Ctrl + C: To stop a command that is not responding. (Cancellation).
10.  Ctrl + D: To send the EOF ( End of File ) signal to a command, normally when you see '>'.
11.  Ctrl + W: To erase the text you have entered a word at a time.
12.  Up arrow key: To redisplay the last executed command. The Down arrow key can be used to print the next command used after using the Up arrow key previously.
13.  The history command can be cleared using a simple option -c (clear).
14.  cd :   The cd command can be used trickily in the following ways:
cd : To switch to the home user
cd * : To change directory to the first file in the directory (only if the first file is a directory)
cd .. : To move back a folder
cd - : To return to the last directory you were in
15.  Files starting with a dot (.) are hidden files.
16.   To view hidden files: ls -a
17.   ls: The ls command can be used trickily in the following ways:
ls -lR : To view a long list of all the files (which includes directories) and their subdirectories recursively.
ls *.* : To view a list of all the files with extensions only.
18.   ls -ll: Gives a long list in the following format
drwxr-xr-x 2 root root 4096 2010-04-29 05:17 bin where
drwxr-xr-x : permission where d stands for directory, rwx stands for owner privilege, r-x stands for the group privilege and r-x stands for others permission respectively.
Here r stands for read, w for write and x for executable.
2=> link count
root=>owner
root=>group
4096=> directory size
2010-04-29=>date of last modification
05:17=> time of last modification
bin=>directory file(in blue)

The color code of the files is as follows:
Blue: Directory file
White: Normal file
Green: Executable file
Yellow: Device file
Magenta: Picture file
Cyan: link file
Red: Compressed file
File Symbol
-(Hyphen) : Normal file
d=directory
l=link file
b=Block device file
c=character device file
19.  Using the rm command: When used with the -rf option, the rm command deletes a file or an entire directory tree without any warning. A simple mistake like rm -rf / somedir instead of rm -rf /somedir can cause major chaos and delete the entire content of the /(root) directory. Hence it is always advisable to use the rm command with the -i option (which prompts before removal). Also remember that there is no undelete option in Linux.
20.  Copying hidden files: cp .* <destination> ( copies only the hidden files to a new destination )
21. dpkg -l : To get a list of all the installed packages.
22. Use of ' > ' and ' >> ' : The ' > ' symbol (the output redirection operator) can be used to add content to a file when used with the cat command, whereas ' >> ' appends to a file. If content is added using only the ' > ' symbol rather than ' >> ', the previous content of the file is deleted and replaced with the new content.
e.g: $ touch text (creates an empty file)
$ cat >text
This is text’s text. ( Save the changes to the file using Ctrl +D)
$cat >> text
This is a new text. (Ctrl + D)
Output of the file:
This is text’s text.
This is a new text.

23.  To count the number of users logged in : who | wc -l

24.  cat:  The cat command can be used trickily in the following ways:
- To count the no. of lines in a file : cat <filename> | wc -l
- To count the no. of words in a file : cat <filename> | wc -w
- To count the no. of characters in a file : cat <filename> | wc -c

25.  To search for lines matching a pattern : cat <filename> | grep [pattern]

26.  The 'tr' command: Used to translate the characters of a file.
tr 'a-z' 'A-Z' <text >text1 : This command, for example, translates all the characters of the 'text' file from lower case to upper case and saves the result to a new file 'text1'.
27.  File permission using chmod: 'chmod' can be used to change file permissions in a simple way by giving the permissions for the owner, group and others in numeric form, where the numeric values are as follows:
r(read-only)=>4
w(write)=>2
x(executable)=>1
e.g. chmod 754 text gives the owner read, write and execute permission, the group read and execute permission, and others read-only permission on the text file.
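The same permissions can be expressed symbolically, which some people find easier to read (this is equivalent to 754):

chmod u=rwx,g=rx,o=r text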
28.  more: It is a filter for paging through text one screenful at a time.
Use it with any of the commands after the pipe symbol to increase readability.
e.g. ls -ll |more
29.  cron : Daemon to execute scheduled commands. Cron enables users to schedule jobs (commands or shell scripts) to run periodically at certain times or dates.
* * * * * echo "hi" > /dev/tty1 displays the text "hi" every minute on tty1
.---------------- minute (0 - 59)
|  .------------- hour (0 - 23)
|  |  .---------- day of month (1 - 31)
|  |  |  .------- month (1 - 12) OR jan,feb,mar,apr ...
|  |  |  |  .---- day of week (0 - 7) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat
*  *  *  *  *  command to be executed
Source of example: Wikipedia
30.  fsck: Used for file system checking. On a non-journaling file system the fsck command can take a very long time to complete. Using it with the option -C displays a progress bar, which doesn't increase the speed but lets you know how long you still have to wait for the process to complete.
e.g. fsck -C
31.  To find the path of the command: which command
e.g. which clear
32. Setting up an alias: Enables the replacement of a word with another string. It is mainly used for abbreviating a system command, or for adding default arguments to a regularly used command.
e.g. alias cls='clear' => makes cls an alias for the clear command
33.  The du (disk usage) command can be used with the option -h to print the space occupied in human readable form. More specifically it can be used with the summation option (-s).
e.g. du -sh /home summarizes the total disk usage by the home directory in human readable form.
34.  Two or more commands can be combined with the && operator. However, the succeeding command is executed if and only if the previous one succeeds.
e.g. ls && date lists the contents of the directory first and then gives the system date.
35.  Surfing the net in text only mode from the terminal: elinks [URL]
e.g: elinks www.google.com
Note that the elinks package has to be installed in the system.
36.  The ps command displays a great deal more information than the kill command does.
37.  To extract a number of lines from a file:
e.g. head -n 4 abc.c extracts the first 4 lines of the file abc.c
e.g. tail -n 4 abc.c extracts the last 4 lines of the file abc.c
38.  Any change to a file might unknowingly cause the loss of important data. Hence many Linux editors create a file with the same name followed by a ~ (tilde) sign, containing the file as it was before the recent changes. This comes in really handy when playing with configuration files, as some sort of backup is created.
39.   A variable can be defined with the '=' operator. A long block of text can be assigned to the variable and then reused repeatedly by typing the variable name preceded by a $ sign, instead of writing the whole chunk of text again and again.
e.g ldir=/home/my/Desktop/abc
cp abcd $ldir copies the file abcd to /home/my/Desktop/abc.
40. To find all the files in your home directory modified or created today:
e.g. find ~ -type f -mtime 0

Server Control View State

View state is a fancy name for ASP.NET storing some state data in a hidden input field inside the generated page. When the page is posted back to the server, the server can parse, validate, and apply this view state data back to the page's tree of controls. View state is a very powerful capability since it allows state to be persisted with the client and it requires no cookies or server memory to save this state. Many ASP.NET server controls use view state to persist settings made during interactions with elements on the page, for example, saving the current page that is being displayed when paging through data.
There are a number of drawbacks to the use of view state, however. First of all, it increases the total payload of the page both when served and when requested. There is also an additional overhead incurred when serializing or deserializing view state data that is posted back to the server. Lastly, view state increases the memory allocations on the server.
Several server controls, the most well known of which is the DataGrid, tend to make excessive use of view state, even in cases where it is not needed. The default behavior of the ViewState property is enabled, but if you don't need it, you can turn it off at the control or page level. Within a control, you simply set the EnableViewState property to false, or you can set it globally within the page using this setting:
<%@ Page EnableViewState="false" %>
If you are not doing postbacks in a page or are always regenerating the controls on a page on each request, you should disable view state at the page level.
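For the control-level case, a minimal sketch (the grid ID is just an example) looks like this:

<asp:DataGrid id="ProductsGrid" runat="server" EnableViewState="false" />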

Use Gzip Compression

While not necessarily a server performance tip (since you might see CPU utilization go up), using gzip compression can decrease the number of bytes sent by your server. This gives the perception of faster pages and also cuts down on bandwidth usage. Depending on the data sent, how well it can be compressed, and whether the client browsers support it (IIS will only send gzip compressed content to clients that support gzip compression, such as Internet Explorer 6.0 and Firefox), your server can serve more requests per second. In fact, just about any time you can decrease the amount of data returned, you will increase requests per second.
The good news is that gzip compression is built into IIS 6.0 and is much better than the gzip compression used in IIS 5.0. Unfortunately, when attempting to turn on gzip compression in IIS 6.0, you may not be able to locate the setting on the properties dialog in IIS. The IIS team built awesome gzip capabilities into the server, but neglected to include an administrative UI for enabling it. To enable gzip compression, you have to spelunk into the innards of the XML configuration settings of IIS 6.0 (which isn't for the faint of heart). By the way, the credit goes to Scott Forsyth of OrcsWeb who helped me figure this out for the www.asp.net servers hosted by OrcsWeb.
Rather than include the procedure in this article, just read the article by Brad Wilson at IIS6 Compression. There's also a Knowledge Base article on enabling compression for ASPX, available at Enable ASPX Compression in IIS. It should be noted, however, that dynamic compression and kernel caching are mutually exclusive on IIS 6.0 due to some implementation details.

Run IIS 6.0 (If Only for Kernel Caching)

If you're not running IIS 6.0 (Windows Server 2003), you're missing out on some great performance enhancements in the Microsoft Web server. In Tip 7, I talked about output caching. In IIS 5.0, a request comes through IIS and then to ASP.NET. When caching is involved, an HttpModule in ASP.NET receives the request, and returns the contents from the Cache.
If you're using IIS 6.0, there is a nice little feature called kernel caching that doesn't require any code changes to ASP.NET. When a request is output-cached by ASP.NET, the IIS kernel cache receives a copy of the cached data. When a request comes from the network driver, a kernel-level driver (no context switch to user mode) receives the request, and if cached, flushes the cached data to the response, and completes execution. This means that when you use kernel-mode caching with IIS and ASP.NET output caching, you'll see unbelievable performance results. At one point during the Visual Studio 2005 development of ASP.NET, I was the program manager responsible for ASP.NET performance. The developers did the magic, but I saw all the reports on a daily basis. The kernel mode caching results were always the most interesting. The common characteristic was network saturation by requests/responses and IIS running at about five percent CPU utilization. It was amazing! There are certainly other reasons for using IIS 6.0, but kernel mode caching is an obvious one.

Page Output Caching and Proxy Servers

ASP.NET is your presentation layer (or should be); it consists of pages, user controls, server controls (HttpHandlers and HttpModules), and the content that they generate. If you have an ASP.NET page that generates output, whether HTML, XML, images, or any other data, and you run this code on each request and it generates the same output, you have a great candidate for page output caching.
By simply adding this line to the top of your page
<%@ OutputCache Duration="60" VaryByParam="none" %>
you can effectively generate the output for this page once and reuse it multiple times for up to 60 seconds, at which point the page will re-execute and the output will once again be added to the ASP.NET Cache. This behavior can also be accomplished using some lower-level programmatic APIs, too. There are several configurable settings for output caching, such as the VaryByParam attribute just described. VaryByParam happens to be required, but allows you to specify the HTTP GET or HTTP POST parameters to vary the cache entries by. For example, default.aspx?Report=1 or default.aspx?Report=2 could be output-cached by simply setting VaryByParam="Report". Additional parameters can be named by specifying a semicolon-separated list. Many people don't realize that when the output cache is used, the ASP.NET page also generates a set of HTTP headers that downstream caching servers, such as those used by the Microsoft Internet Security and Acceleration Server or by Akamai, can honor. When HTTP Cache headers are set, the documents can be cached on these network resources, and client requests can be satisfied without having to go back to the origin server.
Using page output caching, then, does not make your application more efficient, but it can potentially reduce the load on your server as downstream caching technology caches documents. Of course, this can only be anonymous content; once it's downstream, you won't see the requests anymore and can't perform authentication to prevent access to it.
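The lower-level programmatic route mentioned above can be sketched with the HttpCachePolicy API; this fragment, placed in a page's code-behind, should mirror the directive's 60-second public cache:

// Equivalent of Duration="60" with public cacheability.
Response.Cache.SetCacheability(HttpCacheability.Public);
Response.Cache.SetExpires(DateTime.Now.AddSeconds(60));
Response.Cache.SetValidUntilExpires(true);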

Background Processing in ASP.NET

The path through your code should be as fast as possible, right? There may be times when you find yourself performing expensive tasks on each request or once every n requests. Sending out e-mails or parsing and validation of incoming data are just a few examples.
When tearing apart ASP.NET Forums 1.0 and rebuilding what became Community Server, we found that the code path for adding a new post was pretty slow. Each time a post was added, the application first needed to ensure that there were no duplicate posts, then it had to parse the post using a "badword" filter, parse the post for emoticons, tokenize and index the post, add the post to the moderation queue when required, validate attachments, and finally, once posted, send e-mail notifications out to any subscribers. Clearly, that's a lot of work.
It turns out that most of the time was spent in the indexing logic and sending e-mails. Indexing a post was a time-consuming operation, and it turned out that the built-in System.Web.Mail functionality would connect to an SMTP server and send the e-mails serially. As the number of subscribers to a particular post or topic area increased, it would take longer and longer to perform the AddPost function.
Indexing and sending e-mail didn't need to happen on each request. Ideally, we wanted to batch this work together, indexing 25 posts at a time or sending all the e-mails every five minutes. We decided to use the same code I had used to prototype database cache invalidation for what eventually got baked into Visual Studio® 2005.
The Timer class, found in the System.Threading namespace, is a wonderfully useful, but less well-known class in the .NET Framework, at least for Web developers. Once created, the Timer will invoke the specified callback on a thread from the ThreadPool at a configurable interval. This means you can set up code to execute without an incoming request to your ASP.NET application, an ideal situation for background processing. You can do work such as indexing or sending e-mail in this background process too.
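A minimal sketch of that pattern with System.Threading.Timer (the five-minute interval and the DoBatchWork name are illustrative, not the actual Community Server code):

using System;
using System.Threading;

public static class BackgroundWork
{
    // Hold a reference so the timer is not garbage collected.
    private static Timer timer;

    public static void Start()
    {
        // Invoke DoBatchWork on a ThreadPool thread every five minutes,
        // independent of any incoming request.
        timer = new Timer(DoBatchWork, null,
            TimeSpan.FromMinutes(5), TimeSpan.FromMinutes(5));
    }

    private static void DoBatchWork(object state)
    {
        // Batch the expensive work here: index queued posts,
        // send pending notification e-mails, and so on.
    }
}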
There are a couple of problems with this technique, though. If your application domain unloads, the timer instance will stop firing its events. In addition, since the CLR has a hard gate on the number of threads per process, you can get into a situation on a heavily loaded server where timers may not have threads to complete on and can be somewhat delayed. ASP.NET tries to minimize the chances of this happening by reserving a certain number of free threads in the process and only using a portion of the total threads for request processing. However, if you have lots of asynchronous work, this can be an issue.