40/sec to 500/sec

Introduction

Surprised by the title? Well, this is a tour of how we cracked the scalability jinx, going from handling a meagre 40 records per second to 500 records per second. Be warned: most of the problems we faced were straightforward, so experienced readers might find this superfluous.
Contents

* 1.0 Where were we?

1.1 Memory hits the sky
1.2 Low processing rate
1.3 Data loss :-(
1.4 Mysql pulls us down
1.5 Slow Web Client

* 2.0 Road to Nirvana

2.1 Controlling memory!
2.2 Streamlining processing rate
2.3 What data loss uh-uh?
2.4 Tuning SQL Queries
2.5 Tuning database schema
2.6 Mysql helps us forge ahead!
2.7 Faster...faster Web Client

* 3.0 Bottom line

Where were we?

Initially we had a system which could scale only up to 40 records/sec. I can even recollect the discussion about "what should be the ideal rate of records?". Finally we decided that 40/sec was the ideal rate for a single firewall, and since we needed to support at least 3 firewalls when we went out, we settled on 120/sec as the ideal rate. Based on the data from our competitors, we came to the conclusion that they could support around 240/sec. We thought that was OK, as it was our first release, and all the competitors talked about the number of firewalls they supported but not about the rate.

Memory hits the sky

Our memory usage was always hitting the sky, even at 512MB (OutOfMemory exceptions)! We blamed cewolf's in-memory caching of the generated images, but we could not escape for long: whether we connected the client or not, we used to hit the sky in a couple of days, 3-4 days flat at most. Interestingly, this was reproducible when we sent data at what was then a very high rate, around 50/sec. You guessed it right: an unbounded buffer which grows until it hits the roof.

Low processing rate

We were processing records at the rate of 40/sec, using bulk updates of data objects, but that did not give the expected speed! Because of this we started to hoard data in memory, which in turn meant hoarding memory!

Data Loss :-(

At very high speeds we used to miss many packets. At first we seemed to have little data loss, but only because the data piled up in memory instead, hogging it. After some tweaking to limit the buffer size, we started having a steady data loss of about 20% at very high rates.

Mysql pulls us down

We were facing a tough time when we imported a log file of about 140MB. Mysql started to hog resources, the machine started crawling, and sometimes it even stopped responding. Above all, we started getting deadlocks and transaction timeouts, which eventually reduced the responsiveness of the system.

Slow Web Client

Here again we blamed the number of graphs we showed on a page as the bottleneck, ignoring the fact that there were many other factors pulling the system down. A page with 6-8 graphs and tables used to take 30 seconds to load after 4 days of data at the Internet Data Center.

Road To Nirvana

Controlling Memory!

We tried to put a limit of 10,000 on the buffer size, but it did not last for long. The major flaw in the design was the assumption that a buffer of around 10,000 would suffice, i.e. that we would process records before the buffer of 10,000 filled up. In line with the principle "if something can go wrong, it will go wrong!", it went wrong: we started losing data. Subsequently we decided to go with flat file based caching, wherein the data was dumped into a flat file and loaded into the database using "load data infile". This was many times faster than a bulk insert via the database driver. You might also want to check out some possible optimizations with "load data infile". This fixed our problem of the ever-growing buffer of raw records.
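
For illustration, here is a minimal sketch of the flat file approach (the table name "raw_records", the spool path, and the tab-separated layout are our own assumptions for this example, not the actual schema):

    // Sketch: spool records to a flat file, then bulk-load with LOAD DATA INFILE.
    // Table/file names are illustrative; LOCAL may need to be enabled in the driver.
    import java.io.FileWriter;
    import java.io.PrintWriter;
    import java.sql.Connection;
    import java.sql.Statement;
    import java.util.List;

    public class FlatFileLoader {
        private static final String SPOOL = "/tmp/raw_records.txt";

        // Append buffered records to the spool file as tab-separated lines,
        // instead of holding them in memory.
        static void spool(List<String[]> records) throws Exception {
            try (PrintWriter out = new PrintWriter(new FileWriter(SPOOL, true))) {
                for (String[] fields : records) {
                    out.println(String.join("\t", fields));
                }
            }
        }

        // Bulk-load the spool file in one shot; this is many times faster than
        // inserting row by row through the driver.
        static void load(Connection conn) throws Exception {
            try (Statement stmt = conn.createStatement()) {
                stmt.execute("LOAD DATA LOCAL INFILE '" + SPOOL
                        + "' INTO TABLE raw_records FIELDS TERMINATED BY '\\t'");
            }
        }
    }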

The second problem we faced was cewolf's in-memory caching mechanism. By default it used "TransientSessionStorage", which caches the image objects in memory; there seemed to be some problem in cleaning up the objects, even after the references were lost! So we wrote a small "FileStorage" implementation which stores the image objects in local files and serves them as and when requests come in. Moreover, we also implemented a cleanup mechanism to remove stale images (images older than 10 minutes).
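
A rough sketch of such a file-backed store follows; the class below is our own illustration (not cewolf's actual storage interface), with the cache directory made up and the 10-minute cutoff taken from what we did:

    // Sketch: keep rendered chart images on disk, not in the HTTP session,
    // and purge anything older than 10 minutes.
    import java.io.File;
    import java.io.FileOutputStream;
    import java.nio.file.Files;

    public class FileImageStore {
        private static final long MAX_AGE_MS = 10 * 60 * 1000; // 10 minutes
        private final File dir = new File("/tmp/chart-cache"); // illustrative path

        public FileImageStore() {
            dir.mkdirs();
        }

        // Write the rendered image bytes to a local file.
        public void put(String key, byte[] imageBytes) throws Exception {
            try (FileOutputStream out = new FileOutputStream(new File(dir, key))) {
                out.write(imageBytes);
            }
        }

        // Read the image back as and when the request comes in.
        public byte[] get(String key) throws Exception {
            return Files.readAllBytes(new File(dir, key).toPath());
        }

        // Delete stale images; meant to be run periodically.
        public void cleanupStale() {
            long cutoff = System.currentTimeMillis() - MAX_AGE_MS;
            File[] files = dir.listFiles();
            if (files == null) return;
            for (File f : files) {
                if (f.lastModified() < cutoff) f.delete();
            }
        }
    }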

Another interesting aspect we found here was that the garbage collector had the lowest priority, so the objects created for each record were hardly cleaned up. Here is a little math to explain the magnitude of the problem. Whenever we received a log record we created ~20 objects (hashmaps, tokenized strings etc.), so at the rate of 500/sec, one second's worth of records meant 10,000 objects (20 × 500). Due to the heavy processing, the garbage collector never had a chance to clean them up. So all we had to do was a minor tweak: we just assigned "null" to the object references. Voila! The garbage collector was never tortured, I guess ;-)
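
The tweak itself was as trivial as it sounds; a hedged illustration (the record format and field handling below are made up):

    // Sketch: null out per-record references as soon as processing is done,
    // so the garbage collector can reclaim the ~20 objects per record promptly.
    import java.util.HashMap;
    import java.util.Map;

    public class RecordHandler {
        void handle(String rawRecord) {
            Map<String, String> fields = new HashMap<>();
            for (String token : rawRecord.split(" ")) { // tokenizing creates short-lived objects
                String[] kv = token.split("=", 2);
                if (kv.length == 2) fields.put(kv[0], kv[1]);
            }
            // ... apply alert filters, update counters ...
            fields = null; // drop the reference explicitly rather than waiting for scope end
        }
    }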

Streamlining processing rate

The processing rate was a meagre 40/sec, which meant we could hardly withstand even a small outburst of log records! The memory control gave us some solace, but the actual problem was the application of the alert filters over the records. We had around 20 properties for each record, and we used to search for a match on every property, even when no alert criteria were configured at all. We changed the implementation to match only those properties for which we had criteria! Moreover, we also had a memory leak in the alert filter processing: we maintained a queue which grew forever. So we moved to dumping the objects to a flat file, which also avoided re-parsing records to form objects.
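
A minimal sketch of the criteria-driven matching (the property names and the equality-only comparison are simplifying assumptions):

    // Sketch: evaluate only the properties that actually have alert criteria
    // configured, instead of scanning all ~20 properties of every record.
    import java.util.Map;

    public class AlertFilter {
        private final Map<String, String> criteria; // property name -> expected value

        public AlertFilter(Map<String, String> criteria) {
            this.criteria = criteria;
        }

        public boolean matches(Map<String, String> record) {
            if (criteria.isEmpty()) return false; // nothing configured: skip matching entirely
            for (Map.Entry<String, String> c : criteria.entrySet()) {
                if (!c.getValue().equals(record.get(c.getKey()))) return false;
            }
            return true;
        }
    }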

What data loss uh-uh?

Once we fixed the memory issues in receiving data, i.e. by dumping into a flat file, we never lost data! In addition to that, we had to remove a couple of unwanted indexes on the raw table to avoid the overhead while dumping data. We had indexes on columns which could have a maximum of 3 possible values; these actually made inserts slower and were not useful.

Tuning SQL Queries

Your queries are your keys to performance. Once you start nailing down the issues, you will see that you might even have to de-normalize the tables. We did it! Here are some of the key learnings:

* Use "Analyze table" to identify how the mysql query works. This will give you insight about why the query is slow, i.e whether it is using the correct indexes, whether it is using a table level scan etc.

* Never delete rows when you deal with huge data, in the order of 50,000 records in a single table. Always try to do a "drop table" instead, as much as possible. If that is not possible, redesign your schema; that is your only way out!

* Avoid unwanted joins; don't be afraid to de-normalize (i.e. duplicate the column values). Avoid joins as much as possible; they tend to pull your queries down. One hidden advantage of de-normalizing is that it imposes simplicity on your queries.

* If you are dealing with bulk data, always use "load data infile"; there are two options here, local and remote. Use local if mysql and the application are on the same machine, otherwise use remote.

* Try to split your complex queries into two or three simpler ones. The advantage of this approach is that mysql resources are not hogged for the entire process. Tend to use temporary tables instead of a single query which spans across 5-6 tables.

* When you deal with a huge amount of data, i.e. you want to process say 50,000 records or more in a single query, try using "limit" to batch-process the records (see the sketch after this list). This will help you scale the system to new heights.

* Always use smaller transactions instead of large ones spanning "n" tables. Large transactions lock up mysql resources, which might cause slowness of the system even for simple queries.

* Use joins on columns with indexes or foreign keys.

* Ensure that the queries from the user interface have criteria or a limit.

* Also ensure that the criteria columns are indexed.

* Do not put numeric values in sql criteria within quotes, because mysql then does a type cast.

* Use temporary tables as much as possible, and drop them when you are done...

* An insert from a select (and likewise a delete based on one) takes locks on both tables involved... be aware...

* Take care that you do not pound the mysql database with overly frequent updates. We had a typical case: we used to dump to the database after every 300 records. So when we started testing at 500/sec, we saw that mysql was literally dragging us down; that is when we realized that at the rate of 500/sec there is a "load data infile" request to the mysql database roughly every second. So we changed the policy to dump the records every 3 minutes rather than every 300 records.
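
As promised above, here is a minimal sketch of limit-based batch processing over JDBC (the connection URL, table, and column names are illustrative assumptions):

    // Sketch: walk a huge table in LIMIT-ed batches instead of one giant query,
    // so mysql is never hogged for the entire run.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class BatchReader {
        private static final int BATCH = 5000;

        public static void main(String[] args) throws Exception {
            Connection conn = DriverManager.getConnection(
                    "jdbc:mysql://localhost/reports", "user", "password");
            // The id cutoff is bound as a number, not a quoted string, so mysql
            // does not have to type-cast the criteria (see the tip above).
            PreparedStatement ps = conn.prepareStatement(
                    "SELECT id, src_ip, bytes FROM raw_records"
                    + " WHERE id > ? ORDER BY id LIMIT " + BATCH);
            long lastId = 0;
            while (true) {
                ps.setLong(1, lastId);
                int seen = 0;
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        lastId = rs.getLong("id");
                        seen++;
                        // ... aggregate this row into the report tables ...
                    }
                }
                if (seen < BATCH) break; // last batch processed
            }
            ps.close();
            conn.close();
        }
    }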

Tuning database schema

When you deal with a huge amount of data, always ensure that you partition your data; that is your road to scalability. A single table with, say, 10 lakh (one million) rows can never scale when you intend to execute queries for reports. Always have two levels of tables: raw tables for the actual data and another set of report tables (the tables which the user interfaces query on!). Always ensure that the data in your report tables never grows beyond a limit. In case you are planning to use Oracle, you can try out partitioning based on criteria; unfortunately mysql does not support that, so we had to do it ourselves. Maintain a meta table which holds the header information, i.e. which table to look in for a given set of criteria, normally time.
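
A hedged sketch of that meta-table lookup (the "report_meta" layout and names below are our illustration):

    // Sketch: resolve which partitioned report table holds data for a given time,
    // using a meta table of (table_name, start_time, end_time) rows.
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class PartitionResolver {
        // Returns the report table covering the timestamp, e.g. "report_2005_08".
        static String tableFor(Connection conn, long timestamp) throws Exception {
            PreparedStatement ps = conn.prepareStatement(
                    "SELECT table_name FROM report_meta"
                    + " WHERE start_time <= ? AND end_time > ?");
            ps.setLong(1, timestamp);
            ps.setLong(2, timestamp);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString("table_name") : null;
            }
        }
    }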

* We had to walk through our database schema; we added some indexes, deleted some, and even duplicated columns to remove costly joins.

* Going forward we realized that having the raw tables as InnoDB was actually an overhead to the system, so we changed them to MyISAM (see the sketch after this list).

* We also went to the extent of reducing the number of rows in static tables involved in joins

* NULLs in database tables seem to cause some performance hit, so avoid them.

* Don't have indexes on columns which have only 2-3 allowed values.

* Cross-check the need for each index in your table; they are costly. If the tables are InnoDB, then double-check their need, because InnoDB tables seem to take around 10-15 times the size of MyISAM tables.

* Use MyISAM whenever the workload is dominated by one kind of query, either selects or inserts. If both inserts and selects are going to be heavy, it is better to have the table as InnoDB.
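
As referenced above, these adjustments boil down to statements like the following (all table, column, and index names here are made up for illustration):

    // Sketch: the schema adjustments described above, issued via JDBC.
    import java.sql.Connection;
    import java.sql.Statement;

    public class SchemaTuner {
        static void tune(Connection conn) throws Exception {
            try (Statement stmt = conn.createStatement()) {
                // Raw tables are insert-heavy: switch them from InnoDB to MyISAM.
                stmt.execute("ALTER TABLE raw_records ENGINE=MyISAM");
                // Drop an index on a column with only 2-3 possible values.
                stmt.execute("ALTER TABLE raw_records DROP INDEX idx_protocol");
                // De-normalize: duplicate a column to remove a costly join.
                stmt.execute("ALTER TABLE report_traffic ADD COLUMN firewall_name VARCHAR(64)");
            }
        }
    }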

Mysql helps us forge ahead!

Tune your mysql server ONLY after you fine-tune your queries/schemas and your code. Only then will you see a perceivable improvement in performance. Here are some of the parameters that come in handy:

* Set the buffer sizes that enable your queries to execute faster: --innodb_buffer_pool_size=64M for InnoDB and --key_buffer_size=32M for MyISAM.

* Even simple queries started taking more time than expected. We were actually puzzled! We realized that mysql seems to load the index of any table it starts inserting on. So what typically happened was that any simple query to a table with 5-10 rows took around 1-2 secs. On further analysis we found that just before the simple query, a "load data infile" had happened. This disappeared when we changed the raw tables to MyISAM, because the buffer sizes for InnoDB and MyISAM are two different configurations.

For more configurable parameters, see the references under "See Also" below.

Tip: start your mysql server with the option --log-error; this will enable error logging.

Faster...faster Web Client

The user interface is the key to any product, and the perceived speed of the pages matters even more! Here is a list of solutions and learnings that might come in handy:

* If your data is not going to change for, say, 3-5 minutes, it is better to cache your client-side pages.

* Tend to use iframes for inner graphs etc.; they give a perceived fastness to your pages. Better still, use a javascript-based content loading mechanism. This is something you might want to do when you have, say, 3+ graphs on the same page.

* Internet Explorer displays the whole page only when all the contents have been received from the server, so it is advisable to use iframes or javascript for content loading.

* Never use multiple/duplicate entries of the CSS file in the html page. Internet Explorer tends to load each CSS file as a separate entry and apply it to the complete page!

Bottom line

Your queries and schema make the system slow! Fix them first and then blame the database!

See Also

* High Performance Mysql

* Query Performance

* Explain Query

* Optimizing Queries

* InnoDB Tuning

* Tuning Mysql


-Ramesh-
