Hi guys, I'm developing a mid-size VB.NET project for a company that uses SQL Server as its database solution.
I'm dealing with at least 128,000 records, and I'm seeing some lag when running operations on SQL Server (on a normal SATA HDD). So the question is: is there any noticeable performance upgrade in this situation when using SSD drives?

I can't simply buy an SSD to test it, so I'll do some research and take your opinions so I can tell the client, "Yes, the SSD is better, you can buy it." I'm waiting for you, geeks ;)
SSDs are much faster than SATA HDDs for large amounts of small reads. The seek time of an SSD is close to zero, which means accessing and retrieving information from the database will of course be much faster. I think it would give a noticeable performance boost when dealing with large databases.

BUT 128,000 records is not a very big amount for a database! Even 1,000,000 records isn't too much for a DBMS such as SQL Server. Your lag problem may not really be caused by hardware limitations. It could be due to a poorly designed data model and table structure (always make sure your relational database design conforms to the 3rd normal form), poorly written queries in your application, a lack of indexes (indexing on some fields hugely affects the speed of retrieving records), not using primary keys, etc. A sketch of what I mean by indexing is below.
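For example, here's a minimal sketch of the kind of fix I mean, on a hypothetical Orders table (not your actual schema) that gets filtered by customer and date:

    -- Hypothetical table, for illustration only.
    -- A clustered primary key gives the table an ordered storage structure.
    ALTER TABLE dbo.Orders
        ADD CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderId);

    -- A nonclustered index on the columns your WHERE clauses filter on;
    -- INCLUDE covers the selected column so the query avoids key lookups.
    CREATE NONCLUSTERED INDEX IX_Orders_CustomerId_OrderDate
        ON dbo.Orders (CustomerId, OrderDate)
        INCLUDE (TotalAmount);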

Furthermore, it could be that the server is outdated and can't perform well.

I suggest you check out these issues first: make sure your database and queries are really optimized, and only later think about changing the hardware.

Check out this link, it should be helpful: http://msdn.microsoft.com/en-us/library/ff647793.aspx
AymanFarhat wrote: It could be due to poorly written queries in your application. A lack of indexes (indexing on some fields hugely affects the speed of retrieving records). Not using primary keys, etc.
Ayman has nailed it right there. 128,000 records is way too little to start thinking about hardware upgrades. At work, we're on a project that is supposed to start with 2 million records (migrated from the client's old data store) and is expected to reach around 10 million records in the upcoming 2 years. We had a query that took around 1.5 hours the first time we wrote it. Once we added the proper indexes and did some optimization, the time dropped to around 37 seconds.
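If it helps, this is how we usually measure a query before and after tuning; SET STATISTICS is built into SQL Server, and the query here is just a placeholder for whatever you're tuning:

    -- Per-query timing and I/O counters appear in the Messages tab of SSMS.
    SET STATISTICS TIME ON;
    SET STATISTICS IO ON;

    -- Placeholder: substitute the slow query you're working on.
    SELECT CustomerId, COUNT(*) AS OrderCount
    FROM dbo.Orders
    GROUP BY CustomerId;

    SET STATISTICS TIME OFF;
    SET STATISTICS IO OFF;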
AymanFarhat wrote: Always make sure your relational database design conforms to the 3rd normal form.
This is very important. But one thing to note is that certain tables are actually nothing more than "look-up tables", i.e. they are only used to store static data that will most likely never change, such as gender, nationality, status, etc. I suggest you keep those in a proper non-relational table structure. We usually have 3 tables to cater for this scenario: ListName, ListValues, and LanguageDescriptions; a sketch is below. The reason we do this is to avoid joins whose only purpose is to display data (such as Male, Active, etc.). Accordingly, we get all the look-ups on application start and cache them, and we have a helper class that takes the ListValueId or ListValueCode (ex: Gender_Male) and returns the corresponding localized value (Male).
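Roughly, the three tables look like this; this is a sketch from memory, so the exact column names are my own guesses rather than our production schema:

    CREATE TABLE dbo.ListName (
        ListNameId INT IDENTITY(1,1) PRIMARY KEY,
        Code       NVARCHAR(50) NOT NULL UNIQUE   -- e.g. 'Gender', 'Status'
    );

    CREATE TABLE dbo.ListValues (
        ListValueId INT IDENTITY(1,1) PRIMARY KEY,
        ListNameId  INT NOT NULL REFERENCES dbo.ListName (ListNameId),
        Code        NVARCHAR(50) NOT NULL UNIQUE  -- e.g. 'Gender_Male'
    );

    CREATE TABLE dbo.LanguageDescriptions (
        ListValueId  INT NOT NULL REFERENCES dbo.ListValues (ListValueId),
        LanguageCode CHAR(2) NOT NULL,            -- e.g. 'en', 'fr'
        Description  NVARCHAR(100) NOT NULL,      -- e.g. 'Male'
        PRIMARY KEY (ListValueId, LanguageCode)
    );

The application loads all three at startup and caches them, so displaying "Male" never costs a join at query time.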
3rd normal form as opposed to 2nd or 1st has a performance overhead.
Consistency at the cost of performance.
arithma wrote: 3rd normal form as opposed to 2nd or 1st has a performance overhead. Consistency at the cost of performance.
Actually this is a very important point, but the way I see it, a fully normalized structure only really causes an effective overhead when you're going through tens of millions of rows and thousands of rows are being returned.

De-normalization in such a case would be quite helpful. But with de-normalization comes a huge trade-off, which is the existence of redundant data in the database! A small sketch of the idea is below.
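Here's the idea on hypothetical tables: instead of joining to Customers on every report, you copy the name into Orders and accept that it's now stored twice and must be kept in sync:

    -- Normalized: every report joins Orders to Customers.
    SELECT o.OrderId, c.CustomerName
    FROM dbo.Orders AS o
    JOIN dbo.Customers AS c ON c.CustomerId = o.CustomerId;

    -- Denormalized: the name is copied into Orders, so the join disappears
    -- at the cost of redundant data.
    ALTER TABLE dbo.Orders ADD CustomerName NVARCHAR(100) NULL;

    SELECT OrderId, CustomerName
    FROM dbo.Orders;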
I've got the point :) Thank you, guys.

I started working on this project while I was at university, so the queries and the structure of the DB are not strong; my solution will be on the software side.
I agree with what the others said; it sounds more like a case of "this shouldn't be happening".
But you didn't specify the nature of the records. Do you store big files? What kind of CPU/RAM is the server running on? Are you sure your problem is on the database side and not in the data-processing code of your application, for example?
An SSD is a great performance boost for small, random accesses. If your problem is due to huge records containing (for example) multimedia files, then you're better off doing a RAID of magnetic hard drives.
There are also ways to fine-tune the server and optimize it for the hardware it's running on, and there are tools to benchmark the database and pinpoint your problems; see the sketch below.
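For instance, SQL Server's built-in DMVs can point at the worst offenders; a minimal sketch:

    -- Top 10 cached queries by total elapsed time (sys.dm_exec_query_stats,
    -- available since SQL Server 2005; times are in microseconds).
    SELECT TOP (10)
        qs.total_elapsed_time / qs.execution_count AS avg_elapsed_us,
        qs.execution_count,
        SUBSTRING(st.text, 1, 200) AS query_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY qs.total_elapsed_time DESC;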
Are you running the queries locally or over some kind of network? I don't think an SSD will solve the problem unless you have a lot of records being read and written at the same time. Banks usually purchase SAN storage solutions to handle thousands of bank transactions (which are basically thousands of queries running over a shared database at the same time). I/O speed is crucial in this case. SSDs offer similar I/O speeds but come in small capacities. And for businesses, MLC SSDs are not recommended; you will need an SLC SSD in this case for more durability.

Do your research and read, read, and read some more before purchasing any IO solution. Of course shape up your database first before anything else.
Get 2 HDDs and do RAID 0, or a few HDDs and RAID 0+1.
It's cheaper, with much more storage.
RAID 0 is not for business use; the risk of losing data is just too high. RAID 0+1 or RAID 5 is much better (RAID 5 must be powered by a high-end RAID controller).
RAID 0+1 then: get two HDDs in RAID 0 and a 3rd single drive in RAID 1, so in case your motherboard fries someday, you still have access to the single drive.
RAID 0+1 requires 4 drives, not 3. The first 2 are put in RAID 0, then the other 2 are put in RAID 1 on top of the RAID 0 set.
It's not possible to do RAID 0+1 without having the RAID 1 set up on the RAID 0?
I haven't tried it myself, but it should be possible; if it's really not, then that sucks.
RAID will not improve latency (seek time), which is usually what a database needs most.
That is why businesses use SAS drives in a RAID 5 / RAID 10 configuration; SAN volumes are built on that. Combine SAN volumes with high-end servers as metadata controllers and you'll have a super fly storage system that costs as much as an F1 car :P
Another question, guys, a theoretical one.
Do I get a noticeable difference in terms of speed when putting the DB server on one physical disk and the data on a different physical disk? By noticeable I mean 3 seconds for a query instead of 4 or 5 seconds (+25%).
RAIDing is much better. Of course, putting the data and the database on separate drives is better than having them on the same drive, but if you're requesting records from the database that don't need data from the other drive, then you're still using 1 drive. RAIDing (like RAID 0, 10, 5) uses all drives whatever your query requests, so you're getting top-notch performance regardless of the type of query. (A sketch of the file split you asked about is at the end of this reply.)

Get SAS drives and RAID them (note that RAID 5 requires a high-end RAID controller) and you'll get the desired speed. You can go with SLC SSDs if you have enough "benjamins". SAS+RAID = best bang for the buck :)
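On the separate-disks part of the question: the usual SQL Server version of that is putting the data file and the transaction log on different physical drives, since the log is written sequentially while data access is random. A sketch, with hypothetical drive letters and paths:

    -- Data file on one physical disk, log file on another (paths are examples).
    CREATE DATABASE MyAppDb
    ON PRIMARY (
        NAME = MyAppDb_Data,
        FILENAME = 'D:\SQLData\MyAppDb.mdf'
    )
    LOG ON (
        NAME = MyAppDb_Log,
        FILENAME = 'E:\SQLLogs\MyAppDb.ldf'
    );

Whether that alone buys you 25% on a single query is impossible to promise; it depends on how much the two files were competing for the same spindle.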