
Scaling tips and tricks part two

by @IBaldo


Posted on May, 2021

Intro to part two

Hey!!! Hello again reader!

What?! We didn’t meet before you say? Oh! Head over to part one then.

In the first part we talked about stress testing and metrics; in this second and final part we'll talk about some scaling bottlenecks and roadblocks, again with tips and tricks to give you ideas and tools to succeed.

So without further ado...

The most usual bottleneck is...

Yep, the database; more precisely, the main ACID one, typically an RDBMS, and specifically, since it's the most widely used one in the free open source software world we specialize in at netlabs: MySQL or MariaDB.

MySQL and MariaDB are both excellent databases with a lot of performance and capacity when properly used.

Let me say that again: when properly used.

Many times, they aren’t used the right way:

  • Is every frequently used query using the right indexes (not only indexes, but the RIGHT ones)?
  • Is there lock contention?
  • Is the database software correctly configured?

Many times, increasing the server capacity four times doesn't give four times the processing capacity, so it usually comes as a surprise to some customers that they can't solve their database problem with bigger hardware.

Adding read replicas helps for reads, but sometimes the bottleneck lies in the data modification statements, and not even a multi-primary setup helps (we don't recommend that setup either, since the coordination, certification, retries, etc. of transactions across primaries are often slower than just having a single primary server accepting writes).

Databases are a complex topic and you can expect more articles about them from us, but here are some quick tips anyway…

Start with the queries, not the configuration!

In our experience, most performance is gained from tuning the queries, having better indexes, using the database in a different way, etc.; configuration helps, but do it later.

Queries...

Yes, all systems have too many queries; don't try to optimize them randomly but smartly: use the Performance Schema to find the queries that take the most total running time (the sum of the time taken by each execution of that query over a period of time).

With MySQL Workbench you can easily reset the Performance Schema counters, wait some time, and see which queries take most of the time, along with useful data about them: how many times they are called, maximum and average execution time, rows scanned, whether they do a table scan (i.e. don't use indexes), etc.

Don't use the slow query log; it's not the right tool, since you only see a fraction of queries there. You may have a query that takes 1 second and is called ten times per second, and another that takes 0.1 second and is called a hundred times per second: you need to optimize the latter just as much as the former, yet it never appears in the slow query log (unless you log all queries, which slows the server down, sometimes dangerously!), while it shows up clearly in the Performance Schema.
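
If you prefer a query over Workbench, something like this against the Performance Schema digest table does the same job (the TRUNCATE reset and the LIMIT are just one way to do it; adjust to taste):

```sql
-- Reset the statement statistics, wait a representative interval, then query again
TRUNCATE TABLE performance_schema.events_statements_summary_by_digest;

-- Top 10 statement digests by total execution time (timers are in picoseconds)
SELECT DIGEST_TEXT,
       COUNT_STAR                      AS calls,
       ROUND(SUM_TIMER_WAIT / 1e12, 2) AS total_secs,
       ROUND(AVG_TIMER_WAIT / 1e12, 4) AS avg_secs,
       SUM_ROWS_EXAMINED               AS rows_examined,
       SUM_NO_INDEX_USED               AS no_index_used
FROM   performance_schema.events_statements_summary_by_digest
ORDER  BY SUM_TIMER_WAIT DESC
LIMIT  10;
```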

Then look at each query, use EXPLAIN, see the structure and indexes of the tables that it uses, etc.
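
For example, on a hypothetical orders table (the table, columns, and values here are ours, purely for illustration):

```sql
-- Which plan does the optimizer pick for this frequent query?
EXPLAIN
SELECT id, status, created_at
FROM   orders
WHERE  customer_id = 42
  AND  status = 'paid';
-- Watch "type" (ALL means a full table scan), "key" (the index actually chosen),
-- "rows" (estimated rows examined) and "Extra" (e.g. "Using filesort").
```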


Is it using an index?

If not, then add one.

Is it using an index but... is it the right one?

Does it have good cardinality / selectivity?

Is it just an index that divides the table in half?


Reading the entire table may be faster than reading an index where half the rows share the same value and then going back to the table for the other columns to find the ones you need…

Multi-column indexes are your friend: start with the most selective (highest-cardinality) column and add the other ones after it.

This is important: UPDATEs and DELETEs with bad indexes or table scans cause huge lock contention issues for high traffic websites!
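
Continuing with the hypothetical orders table from above, a composite index with the most selective column first serves both the SELECT and the UPDATE, so the UPDATE locks only the rows it actually touches:

```sql
-- Composite index: customer_id is far more selective than status, so it goes first
ALTER TABLE orders ADD INDEX idx_customer_status (customer_id, status);

-- This UPDATE now locks only the matching rows instead of scanning the table
UPDATE orders
SET    status = 'shipped'
WHERE  customer_id = 42
  AND  status = 'paid';
```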

And talking about locks… yes… they are a huge scalability road block!

Beware of them, use SHOW ENGINE INNODB STATUS\G or other tools.
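
Besides the status snapshot, newer versions let you query the lock-wait views directly; the sys schema view below ships with MySQL 5.7+ and MariaDB 10.6+, while older versions expose information_schema.INNODB_LOCK_WAITS instead:

```sql
-- Full InnoDB snapshot: lock waits, latest detected deadlock, semaphores, etc.
SHOW ENGINE INNODB STATUS\G

-- Who is blocking whom right now (sys schema; see availability note above)
SELECT waiting_pid, waiting_query, blocking_pid, blocking_query, wait_age
FROM   sys.innodb_lock_waits\G
```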


Sometimes having foreign keys in development and testing environments helps detect programming errors, but many times they aren't needed in production; avoiding them there helps with lock issues and performance.

If you have a high-traffic website, don't use ON DELETE CASCADE and ON UPDATE CASCADE just to simplify programming; do it in code and avoid the cost of foreign keys in production.
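
A minimal sketch of what that looks like, again on a hypothetical orders / order_items pair with no foreign keys in production:

```sql
-- Delete the child rows explicitly, in the same transaction as the parent,
-- instead of relying on ON DELETE CASCADE
START TRANSACTION;
DELETE FROM order_items WHERE order_id = 42;
DELETE FROM orders      WHERE id = 42;
COMMIT;
```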

Also remember that updating primary keys is expensive; you shouldn't be doing that too often or you will kill performance (InnoDB tables are clustered by primary key: the entire row is deleted and reinserted, and every secondary index is updated, just for a simple primary key modification!).

Configuration

Oh all those beautiful configuration options you may change, all those buffers that you may increase!!!


STOP!!! STOP RIGHT AWAY!!! PUT YOUR HANDS OFF THE KEYBOARD WHERE I CAN SEE THEM!!!


Only change the settings that you understand (this applies not only to MySQL / MariaDB and other databases, but to any software).

Sometimes increasing a buffer kills performance: it may cause RAM exhaustion after a certain number of simultaneous connections, or too much CPU L2 cache thrashing, or a switch to a different, slower memory allocation strategy, etc.

At netlabs we have seen many poorly configured databases: they would be better off left at their default values!


Yep, sometimes less is better, smaller buffers may help achieve better speed, scalability, etc.

Here follow some easy tips on things to tweak, based on our experience:

The good old query cache is an enemy of scalability: it helps when you have few connections, but as soon as they increase it causes big contention issues. Disable it without a doubt; don't think twice, just disable it.


Also, the adaptive hash index doesn't help for 98% of workloads and causes contention too, so just disable it (yeah, we know, the feature sounds good on paper; disable it!).

Set a reasonable maximum number of connections and use a tool to calculate the memory usage per connection. For example, this one can help: https://www.mysqlcalculator.com/

Set a generous innodb_log_file_size: start at 1G and go no lower than that.

Use as much memory as reasonable for the InnoDB Buffer Pool (aim for about 85% of the server's RAM going to MySQL / MariaDB).

Use innodb_flush_method = O_DIRECT to bypass the kernel page cache (the InnoDB Buffer Pool is smarter, and since most of your RAM should be going to MySQL / MariaDB anyway, the kernel cache would be too small to be useful; it's just overhead).

skip_name_resolve = 1 (unless you really want hostnames in privileges and outputs)

Use a big table_open_cache (and remember to give the server process enough file descriptors), for example 20k.

If using a NUMA server, then setting innodb_numa_interleave = 1 is a no-brainer.

Some other settings help as well, but those above are the most impactful ones and easier to set.
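
Pulled together, a my.cnf sketch of the settings above could look like this; the numbers are illustrative (sized here for a dedicated 64 GB server) and must be adapted to your own workload and RAM:

```ini
[mysqld]
# Query cache: a scalability killer under concurrency (MariaDB / MySQL <= 5.7 only;
# the feature was removed in MySQL 8.0, where these two lines must be dropped)
query_cache_type           = 0
query_cache_size           = 0

# Adaptive hash index: sounds good on paper, causes contention in practice
innodb_adaptive_hash_index = 0

# Size connections against the per-connection memory usage, not wishful thinking
max_connections            = 500

# InnoDB redo log and buffer pool (~85% of RAM on a dedicated 64 GB box)
innodb_log_file_size       = 2G
innodb_buffer_pool_size    = 54G
innodb_flush_method        = O_DIRECT

# Misc
skip_name_resolve          = 1
table_open_cache           = 20000
# Only on NUMA hardware (see above)
innodb_numa_interleave     = 1
```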

On the kernel side, use XFS, and if using an SSD use the 'none' disk scheduler (elevator); if using an HDD, use 'deadline' (what? are you using an HDD in 2021 for a high-traffic website?).

Don't use RAID 5 or 6 (especially software-based ones, since they have the write hole when degraded, which loses data if power is lost!) and use a battery-backed cache on the controller; it's night and day...

With all this said: now you can lower your hands and use your keyboard again, now that you have read this, your life will be better and the world will be a better place, thanks!!!

There are other bottlenecks and roadblocks

We have seen many things at netlabs, some obscure / voodoo ones too.

But first let’s talk about some other common ones that many stumble on.

Bigger tube but connections get locked?

So you opened up the tube (maximum connections) of your web server or other service and strange things happen?

Take a look at the file descriptors!!!

If you need to accept a connection, you need a file descriptor for it; if you need to open that JavaScript, CSS, or cat image file, you need a file descriptor too; those log files need file descriptors; everything in Unix needs a file descriptor.

Find the PIDs of your services and look at /proc/PID/limits to make sure they have enough file descriptors for the number of connections and for everything they must open to respond to those connections.
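
On a systemd-based distro, for example, that check plus the fix looks roughly like this (the nginx service name and the 65535 value are just an illustration):

```ini
# Check the running process first:  cat /proc/<PID>/limits  (look at "Max open files")
# Then raise it with a systemd drop-in, e.g. /etc/systemd/system/nginx.service.d/limits.conf
[Service]
LimitNOFILE=65535
# Apply with: systemctl daemon-reload && systemctl restart nginx
```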

PHP

Using PHP but not using OpCache?

I will pretend that I didn't see that (you are lucky); now go and enable it RIGHT NOW, not tomorrow, NOW (yep, many clients don't enable it).

If the website consists of many PHP files you will need to increase OpCache's maximum number of keys, and if the code base is big, its buffers too (the interned strings buffer also helps reduce RAM consumption, so increasing it until all strings can be interned actually SAVES memory).
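
As a rough php.ini sketch (the directive names are the real OpCache ones; the numbers are illustrative and should be sized to your code base):

```ini
; Assumes the OpCache extension is loaded (zend_extension=opcache)
opcache.enable                  = 1
; Must be larger than the number of PHP files in the code base
opcache.max_accelerated_files   = 20000
; Shared memory for compiled scripts, in MB
opcache.memory_consumption      = 256
; Interned strings buffer, in MB; sizing it so all strings fit actually saves RAM
opcache.interned_strings_buffer = 32
```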

If you are serving static content instead of using a CDN or a caching reverse proxy, don’t use mod_php.

Try to use PHP-FPM if you can, mod_fcgid if you can't.

User sessions

Storing user sessions in an ACID database is inefficient: for each write of the session, several expensive synchronous writes hit storage, followed by some asynchronous ones too.

Better to give those resources to more important data and use KeyDB or Redis, which keep sessions in RAM and write to disk asynchronously from time to time.

Yes, you can lose some sessions if the server crashes, but only a small amount covering a short period of time (seconds), so no big deal really.
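
In PHP, for example, switching the session store is a two-line php.ini change, assuming the phpredis extension is installed (KeyDB speaks the same protocol, so the same settings apply):

```ini
session.save_handler = redis
session.save_path    = "tcp://127.0.0.1:6379"
```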

Shared files between servers

Sharing files with NFS… why?

Don’t do that.

Use object storage (S3 alike), a CDN, synchronize files locally, etc.

If you must, then tune the NFS mounts (not only the NFS server) to be less strict about attribute cache coherency if you can (the ac* options).
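
As a sketch, an /etc/fstab entry with a relaxed attribute cache could look like this; the server, export path, and the 60-second actimeo are illustrative, and you should only do this if the application tolerates slightly stale file metadata:

```
fileserver:/export/shared  /mnt/shared  nfs  rw,noatime,actimeo=60  0  0
```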

Caching DNS

Most infrastructure providers give you DNS servers, but make sure they don't saturate or throttle you under high traffic; we have seen several cases where this caused issues.

In any case, running a caching name server locally on all your servers avoids a lot of network round trips, and they are easy to set up.

Final words and advice

Scaling is hard.


If you are an owner, an executive, a manager, etc., understand your technicians: they are overworked, and scaling problems aren't easy; they take time, research, training, learning, experience, etc. Ask them how you can help instead of putting more stress on them. If they look tired, it's because they are tired; if they look scared, it's because they are scared. Though if they look over-confident, brace yourself and prepare for the worst, heh, because scalability is something that must be respected and approached with humility, otherwise it bites.

As an example, one client had strange Apache lockups when having traffic.

To diagnose the problem, after doing the routine stuff, we had to install debug packages and do gdb backtrace voodoo, strace analysis, etc.; good thing we have C experts on board at netlabs whom we can ask for help!

It ended up being a bug in Apache's mpm_event module, which was not so mature at that time: workers were getting stuck, and the available connection slots were running out until there were none left to accept connections.

The problem was present with normal traffic too, but with more traffic it became more apparent.

Switching to the mpm_worker module fixed it quickly and life went back to being beautiful, birds singing and all!


One last piece of advice: if the problem is hard and the stakes are big, don't ever think that hiring a consulting company for help is a defeat!


Do yourselves a favor and ask for outside help: a consulting firm has already witnessed a lot of different problems and has a diverse group of specialized people to help; it's good know-how to tap into and an excellent opportunity to learn from.


Hope that you enjoyed this series and got some new ideas and tricks to succeed!

