For some applications, such as public websites, merely having several nodes that run the same PHP scripts is enough to consider your website load balanced. If the application in question pulls in the data it needs from the same database and can generate anything else it needs from that, then your application is indeed fully clustered.
The problem, though, comes in when you need to handle user sessions gracefully, independent of whichever backend node they’re communicating with. It’s not enough to merely route each user’s traffic to the same backend node (for example, with an affinity setting in haproxy). That results in an end state where, as long as that backend server remains up, the client’s session data is accessible and the server knows who the user is. The second that node is lost or removed, however, the user’s session goes with it. Imagine being in the middle of a complex requisition form only to be logged out and having to start again. Kind of annoying, right?
That’s where session clustering/persistence comes in. If the instances of your PHP application can be configured to share not only application data (via MySQL, Oracle, etc.) but also the user’s session information, then your PHP application can lose arbitrary nodes and direct traffic according to performance needs, not just wherever it thinks the user’s session happens to live.
- Creating a Simple Load Balanced Application
- PHP Session Handlers
- Examining The Results
- Where to Go From Here
- Further Reading
Creating a Simple Load Balanced Application
Rather than starting out elaborate and trying to get a full CMS like Drupal or WordPress clustered, you should set up just a basic load balance between two PHP-powered webservers running a dummy application. In my example, I have two nginx+PHP-FPM backend servers running the actual application and a frontend nginx webserver load balancing between the two.
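As a sketch of that topology, a minimal frontend nginx configuration might look something like the following. This is an assumption-laden example, not the exact config from my setup: the upstream name, backend IPs, and port 80 listeners are all placeholders.

```nginx
# Hypothetical frontend load balancer: round-robin between the two
# nginx+PHP-FPM backends. Replace the IPs with your backend addresses.
upstream php_backends {
    server 10.0.0.11:80;   # backend node 1
    server 10.0.0.12:80;   # backend node 2
}

server {
    listen 80;

    location / {
        proxy_pass http://php_backends;
        # Pass the original Host header through to the backends.
        proxy_set_header Host $host;
    }
}
```

With no affinity directives set, nginx defaults to round-robin, which is exactly what we want here: each reload should land on a different backend.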
This is the dummy application I’m using. It merely prints out basic details such as the backend server name and some session variables along with a way for users to change the variable and see the other backends pick up on the change immediately.
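The original dummy application isn't reproduced here, but based on the description above, a minimal stand-in might look roughly like this (the session variable names and form field are my own assumptions):

```php
<?php
// Hypothetical dummy app: shows which backend served the request,
// a per-session visit counter, and a session variable the user can change.
session_start();

// Increment a visit counter stored in the session.
$_SESSION['visit_count'] = ($_SESSION['visit_count'] ?? 0) + 1;

// Let the user update a session variable via the form below.
if (isset($_POST['variableString'])) {
    $_SESSION['variableString'] = $_POST['variableString'];
}

echo 'Served by: ' . gethostname() . "<br>\n";
echo 'Visit count: ' . $_SESSION['visit_count'] . "<br>\n";
echo 'variableString: ' . ($_SESSION['variableString'] ?? '(unset)') . "<br>\n";
?>
<form method="post">
  <input name="variableString">
  <button type="submit">Update</button>
</form>
```

Until the backends share a session store, each one keeps its own counter and variable, which is what makes the broken state easy to see.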
Once we have our two disconnected instances of the same application we can then progress onto connecting them by sharing a common server for session handling.
PHP Session Handlers
By default, PHP uses the
files session handler, which saves all session information in serialized form to flat files on the server. On my Ubuntu server the path to these is
/var/lib/php/sessions. This isn’t the only option, though: PHP extensions can register their own session save handlers, and administrators can write their own PHP classes for session handling. The latter even lets you store session information in MySQL.
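To illustrate that last point, a custom handler is just a class implementing PHP's SessionHandlerInterface, registered with session_set_save_handler(). The sketch below assumes a hypothetical MySQL sessions table (id, data, updated_at); the table name and schema are my own invention.

```php
<?php
// Hypothetical MySQL-backed session handler, for illustration only.
class PdoSessionHandler implements SessionHandlerInterface
{
    private $pdo;

    public function __construct(PDO $pdo) { $this->pdo = $pdo; }

    public function open($savePath, $name) { return true; }
    public function close() { return true; }

    public function read($id)
    {
        $stmt = $this->pdo->prepare('SELECT data FROM sessions WHERE id = ?');
        $stmt->execute([$id]);
        return (string) $stmt->fetchColumn();
    }

    public function write($id, $data)
    {
        // REPLACE INTO is MySQL-specific: insert or overwrite the row.
        $stmt = $this->pdo->prepare(
            'REPLACE INTO sessions (id, data, updated_at) VALUES (?, ?, NOW())');
        return $stmt->execute([$id, $data]);
    }

    public function destroy($id)
    {
        return $this->pdo->prepare('DELETE FROM sessions WHERE id = ?')
                         ->execute([$id]);
    }

    public function gc($maxlifetime)
    {
        // Expire sessions idle for longer than $maxlifetime seconds.
        return $this->pdo->prepare(
            'DELETE FROM sessions WHERE updated_at < NOW() - INTERVAL ? SECOND')
                         ->execute([$maxlifetime]);
    }
}

// Register the handler before starting the session.
session_set_save_handler(new PdoSessionHandler($pdo), true);
session_start();
```

We won't go that route here; the Redis extension below gives us a ready-made handler instead.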
For the purposes of this article, we’re going to assume you want to put the session information into a central Redis instance. To get a ready made session handler, we need to install the
redis PHP extension via PECL:
root@39509e1fbea3:/# pecl install redis
downloading redis-3.1.4.tgz ...
Starting to download redis-3.1.4.tgz (199,559 bytes)
.........................................done: 199,559 bytes
20 source files, building
running: phpize
Configuring for:
PHP Api Version:         20151012
Zend Module Api No:      20151012
Zend Extension Api No:   320151012
enable igbinary serializer support? [no] :
building in /tmp/pear/temp/pear-build-defaultuserQURkRf/redis-3.1.
And then enable it in whatever manner is appropriate for how you’re running PHP (FPM in my case):
echo "extension=redis.so" > /etc/php/7.0/fpm/conf.d/90-redis.ini
After reloading PHP (I used
kill -SIGUSR2 <masterFPMprocess>), if you look at the output of
phpinfo() now, you should see two new options underneath “Session” > “Registered save handlers”. Namely:
redis and
rediscluster. We’ll be working with the former, which saves session information to a single
redis instance. The latter is useful for specifying multiple hosts/ports for a Redis cluster. To keep this example simple, we’re just going to use a single central Redis server.
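For reference, the two handlers are configured the same way but take different save_path formats. The rediscluster format below follows the phpredis documentation as I understand it; the hostnames are placeholders:

```ini
; Single Redis instance (what this article uses):
session.save_handler = redis
session.save_path    = "tcp://redis.example.com:6379"

; Redis Cluster variant: save_path lists seed nodes instead of one host.
; session.save_handler = rediscluster
; session.save_path    = "seed[]=redis1:6379&seed[]=redis2:6379"
```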
Now that we’ve installed the
redis PHP extension and decided which session handler we want to use, we need to modify the appropriate
php.ini files to use Redis for session handling. Underneath
[Session] set the following:
session.save_handler = redis
session.save_path = "tcp://<your-redis-host>:6379"

Change
save_path to point to your own Redis instance.
After reloading PHP-FPM again and accessing our dead simple PHP application we start seeing new keys showing up in our Redis instance:
127.0.0.1:6379> get PHPREDIS_SESSION:lgq2flfnidvho46lfaoi7rfkb0
"visit_count|i:2;variableString|s:11:\"hello redis\";"
Oh hello, that looks like our session data. If you run a
TTL command on this key, it should return a value close to 1440 seconds (24 minutes). This value gets reset each time the session key is used, so sessions eventually idle out but remain active for as long as the user is actually using the site.
Examining The Results
If you continue to reload each backend instance, the value of
hostname in the PHP script output should cycle through all the backend servers, the visit count should consistently go up by one, and none of the other variables should change.
We’re just scratching the surface here, though. In a real production environment you would have a clustered Redis architecture that your PHP application pulls from, and you could be dealing with more cluster concerns than just the session data. For instance, if your application takes file uploads that are saved outside the database, these would need to go to some sort of shared storage (such as iSCSI or NFS) that all backend nodes can access.
Where to Go From Here
My personal next step would be to start converting actual PHP applications (such as Drupal or WordPress) over to this load balanced setup. Beyond that, you can explore other possibilities with Redis as an alternative caching mechanism.