Planet MySQL

Planet MySQL - https://planet.mysql.com
  1. In my last blog post we saw how to easily collect general MySQL status information using the sys.diagnostics() procedure. What about a single session/thread? Is there a built-in way to easily trace a target thread's/session's activity (e.g. transaction/statement/stage/wait events), similar to an Oracle SQL trace? The procedure ps_trace_thread(), introduced in MySQL 5.7, allows monitoring a specific thread. This procedure is based on Mark Leith's work in his blog post "A Big Bag of Epic Awesomeness" (I love the title :)). It captures as much information on the thread's activity as possible, depending on the actual configuration and how busy the system is. OK, let's give it a try: the call sketched below monitors the thread with id 114 for 60 seconds and polls the event tables every 1 second (so it's not actually tracing, because there is a chance that we miss some events). It generates a "dot" formatted file which can then be used to graph the event hierarchy. That's it!
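For reference, here is a minimal sketch of what that call looks like; the output path is hypothetical, and the last three flags reset the history tables, auto-enable the needed instrumentation and disable debug output:

-- Arguments: thread_id, outfile, max_runtime (seconds), interval (seconds),
--            start_fresh, auto_setup, debug
CALL sys.ps_trace_thread(114, '/tmp/thread_114.dot', 60, 1, TRUE, TRUE, FALSE);

The resulting .dot file can then be rendered with Graphviz, for example with dot -Tpng /tmp/thread_114.dot -o thread_114.png.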
  2. Laravel 5.8 was recently released with many improvements, so throughout this tutorial we'll be learning how to create an example CRUD application from scratch. The application we'll be building is a simple CRM with a MySQL database. You can see this Upgrade Guide for instructions on how to upgrade an existing web application from Laravel 5.7 to Laravel 5.8.

Tutorial Prerequisites

This tutorial has some prerequisites:

You have at least PHP 7.1.3 installed on your development machine,
Working experience of PHP,
The OpenSSL, PDO, Mbstring, Tokenizer, XML, Ctype, JSON and BCMath PHP extensions installed with your PHP version,
The MySQL database management system,
Composer (a dependency manager for PHP) installed on your machine. You can head to the official website for instructions on how to download and install it.

If you have these prerequisites, let's get started by creating our first Laravel 5.8 project.

Creating a Laravel 5.8 Project with PHP Composer

Let's use Composer to create a project based on Laravel 5.8. Open a new terminal and run the following command:

$ composer create-project --prefer-dist laravel/laravel laravel-crud

This command will automatically start installing the latest version of Laravel, provided that you have the required PHP dependencies for Laravel 5.8 installed on your system. After finishing the installation process, navigate to the project's folder:

$ cd ./laravel-crud

Next, serve your application using the artisan serve command:

$ php artisan serve

This will start a Laravel development server at http://127.0.0.1:8000. Just leave it open, as any changes we make will automatically get reloaded.

Configuring the MySQL Database

We'll be using MySQL, the most popular database system used by PHP and Laravel developers, so make sure you have created a database for your project. You can simply use the mysql client. Open a new terminal window and run the following command:

$ mysql -u root -p

You will get prompted for a password. Enter the one you submitted when you configured your MySQL installation and hit Enter. When the mysql client starts, enter the following SQL instruction to create a database:

mysql> create database l58db;

Note: You can also use phpMyAdmin to create and work with MySQL databases. phpMyAdmin is a free web interface tool created in PHP, intended to handle the administration of MySQL over the Web. It's a beginner-friendly tool commonly used by PHP developers.

Now, let's let Laravel know about our created database. Open the .env file in the root of your project and update the MySQL credentials with your own values:

DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=l58db
DB_USERNAME=root
DB_PASSWORD=<YOUR_DATABASE_PASSWORD>

This will allow your application to connect to your MySQL database. You can also configure the database connection in config/database.php:

The database configuration for your application is located at config/database.php. In this file you may define all of your database connections, as well as specify which connection should be used by default. Examples for most of the supported database systems are provided in this file. (The official docs)

Next, let's create the database tables using the following command:

$ php artisan migrate

Note: Until now, we didn't create any models, but Laravel makes use of multiple tables, so we need to run the previous command to create them in our database. After we create our own models, we can run the artisan migrate command again to update the database structure.
This is the output of the command:

Migration table created successfully.
Migrating: 2014_10_12_000000_create_users_table
Migrated: 2014_10_12_000000_create_users_table
Migrating: 2014_10_12_100000_create_password_resets_table
Migrated: 2014_10_12_100000_create_password_resets_table

You can see that our project already has the users and password_resets tables.

The Application We'll Be Building

We'll be building a simple CRM application that allows sales managers to manage contacts, accounts, leads, opportunities, tasks and related activities. For the sake of simplicity we'll try to add as few interfaces as we can in our application. The main interface is a dashboard which contains the table of contacts and their status (lead, opportunity and customer). We'll not add login and authentication in this tutorial, as it will be the subject of another tutorial.

In our CRM database we'll be making use of the following tables:

contacts — contains information about contacts/customers such as name, address, company/account,
activities — contains activities (phone calls, meetings, emails etc.) about the contacts,
accounts — contains information about contact companies,
users — contains information about the application users.

We'll also be using the following lookup tables:

contact_status — contains contact statuses such as lead, opportunity or customer, which indicate the stage in the sales cycle,
activity_status — the activity status can be either pending, ongoing or completed,
contact_source — contains contact sources.

The contacts table has the following fields: id, title, first name, last name, email, phone, address, source_id, date of first contact, account_id, status_id, user_id.
The contact_status table has the following fields: id, status (lead, proposal, customer, archived).
The contact_source table: id, name.
The accounts table has the following fields: id, name, description.
The activities table has the following fields: id, date, description, contact_id, status_id.
The activity_status table has the following fields: id, status.

Creating Laravel 5.8 Models

According to the database structure above, we'll need to create the following Eloquent models:

Contact
Account
Activity
ContactStatus
ContactSource
ActivityStatus

Head back to your terminal and run the following commands:

$ php artisan make:model Contact --migration
$ php artisan make:model Account --migration
$ php artisan make:model Activity --migration
$ php artisan make:model ContactStatus --migration
$ php artisan make:model ContactSource --migration
$ php artisan make:model ActivityStatus --migration

This will create the models with the corresponding migration files. The models live in the app folder and you can find the migration files in the database/migrations folder. The --migration (or -m) flag creates the corresponding migration file for each model.

Next, in your terminal, run the following command to create the base tables:

$ php artisan migrate

You will get the following output:

Migration table created successfully.
Migrating: 2019_03_12_223818_create_contacts_table
Migrated: 2019_03_12_223818_create_contacts_table
Migrating: 2019_03_12_223832_create_accounts_table
Migrated: 2019_03_12_223832_create_accounts_table
Migrating: 2019_03_12_223841_create_activities_table
Migrated: 2019_03_12_223841_create_activities_table
Migrating: 2019_03_12_223855_create_contact_statuses_table
Migrated: 2019_03_12_223855_create_contact_statuses_table
Migrating: 2019_03_12_223904_create_contact_sources_table
Migrated: 2019_03_12_223904_create_contact_sources_table
Migrating: 2019_03_12_223912_create_activity_statuses_table
Migrated: 2019_03_12_223912_create_activity_statuses_table

In Laravel, you can specify the structure (table fields) in the migration files. Let's start with the contacts table. Open the database/migrations/2019_03_12_223818_create_contacts_table.php file (the date prefix for the file will be different for you) and add the following changes:

public function up()
{
    Schema::create('contacts', function (Blueprint $table) {
        $table->bigIncrements('id');
        $table->timestamps();
        $table->string('title');
        $table->string('first_name');
        $table->string('last_name');
        $table->string('email');
        $table->string('phone');
        $table->string('address');
        $table->date('date');
        $table->bigInteger('user_id')->unsigned();
        $table->foreign('user_id')->references('id')->on('users');
    });
}

Next, open the database/migrations/<YOUR_TIMESTAMP>_create_accounts_table.php file and change accordingly (note that Blueprint has no description() method; we use a text column for the description field):

public function up()
{
    Schema::create('accounts', function (Blueprint $table) {
        $table->bigIncrements('id');
        $table->timestamps();
        $table->string('name');
        $table->text('description');
    });
}

Next, open the database/migrations/<YOUR_TIMESTAMP>_create_activities_table.php file and change accordingly:

public function up()
{
    Schema::create('activities', function (Blueprint $table) {
        $table->bigIncrements('id');
        $table->timestamps();
        $table->string('description');
    });
}

Next, open the database/migrations/<YOUR_TIMESTAMP>_create_contact_statuses_table.php file and change accordingly:

public function up()
{
    Schema::create('contact_statuses', function (Blueprint $table) {
        $table->bigIncrements('id');
        $table->timestamps();
        $table->string('status');
    });
}

Next, open the database/migrations/<YOUR_TIMESTAMP>_create_contact_sources_table.php file and change accordingly:

public function up()
{
    Schema::create('contact_sources', function (Blueprint $table) {
        $table->bigIncrements('id');
        $table->timestamps();
        $table->string('name');
    });
}

Next, open the database/migrations/<YOUR_TIMESTAMP>_create_activity_statuses_table.php file and change accordingly:

public function up()
{
    Schema::create('activity_statuses', function (Blueprint $table) {
        $table->bigIncrements('id');
        $table->timestamps();
        $table->string('status');
    });
}

You can see that we didn't create any foreign keys between the tables. That's because we need to avoid any issues with creating a foreign key to a table that doesn't exist yet. The order of the migrations is important, so either make sure that the tables being referenced are created first, or create the tables without any foreign keys and then add a migration that updates the tables with the required relationships after the tables are created, as sketched below.
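For intuition, here is a hedged sketch of the kind of DDL such an update migration ends up issuing (the constraint name follows Laravel's {table}_{column}_foreign convention, but it is illustrative only):

-- Add the column first, then the foreign key constraint, once the
-- referenced table (contact_sources) already exists.
ALTER TABLE contacts ADD COLUMN source_id BIGINT UNSIGNED;
ALTER TABLE contacts
  ADD CONSTRAINT contacts_source_id_foreign
  FOREIGN KEY (source_id) REFERENCES contact_sources (id);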
Now, let's create the update_contacts_table migration by running the following command:

$ php artisan make:migration update_contacts_table --table=contacts
Created Migration: 2019_03_12_235456_update_contacts_table

Open the database/migrations/<YOUR_TIMESTAMP>_update_contacts_table.php file and update accordingly:

public function up()
{
    Schema::table('contacts', function (Blueprint $table) {
        $table->bigInteger('source_id')->unsigned();
        $table->foreign('source_id')->references('id')->on('contact_sources');
        $table->bigInteger('account_id')->unsigned();
        $table->foreign('account_id')->references('id')->on('accounts');
        $table->bigInteger('status_id')->unsigned();
        $table->foreign('status_id')->references('id')->on('contact_statuses');
    });
}

We create three foreign key relationships to the contact_sources, accounts and contact_statuses tables. Next, let's create the update_activities_table migration by running the following command:

$ php artisan make:migration update_activities_table --table=activities
Created Migration: 2019_03_13_002644_update_activities_table

Open the database/migrations/<YOUR_TIMESTAMP>_update_activities_table.php file and update accordingly:

public function up()
{
    Schema::table('activities', function (Blueprint $table) {
        $table->bigInteger('contact_id')->unsigned();
        $table->foreign('contact_id')->references('id')->on('contacts');
        $table->bigInteger('status_id')->unsigned();
        $table->foreign('status_id')->references('id')->on('activity_statuses');
    });
}

We create two foreign keys to the contacts and activity_statuses tables. Now, run the following command to migrate your database:

$ php artisan migrate

Implementing the Models

The Eloquent ORM included with Laravel provides a beautiful, simple ActiveRecord implementation for working with your database. Each database table has a corresponding "Model" which is used to interact with that table. Models allow you to query for data in your tables, as well as insert new records into the table. (The official docs)

We can interact with our database tables using the corresponding Eloquent models, so we need to implement the required methods in each model.

Defining the Relationships between Models

A contact belongs to a source, a status, an account and a user, and has many activities.
An account belongs to a user (i.e. it was created by a user) and has many contacts.
An activity belongs to a status, a contact and a user.
A contact status has many contacts.
A contact source has many contacts.
An activity status has many activities.

Open the app/Account.php file and change accordingly:

<?php
namespace App;

use Illuminate\Database\Eloquent\Model;

class Account extends Model
{
    public function contacts(){
        return $this->hasMany('App\Contact');
    }
    public function user(){
        return $this->belongsTo('App\User');
    }
}

Next, open the app/Activity.php file and change accordingly:

<?php
namespace App;

use Illuminate\Database\Eloquent\Model;

class Activity extends Model
{
    public function contact(){
        return $this->belongsTo('App\Contact');
    }
    public function status(){
        return $this->belongsTo('App\ActivityStatus');
    }
    public function user(){
        return $this->belongsTo('App\User');
    }
}

Next, open the app/ActivityStatus.php file and change accordingly:

<?php
namespace App;

use Illuminate\Database\Eloquent\Model;

class ActivityStatus extends Model
{
    public function activities(){
        return $this->hasMany('App\Activity');
    }
}

Next, open the app/Contact.php file and update accordingly:

<?php
namespace App;

use Illuminate\Database\Eloquent\Model;

class Contact extends Model
{
    protected $fillable = [
        'title',
        'first_name',
        'last_name',
        'email',
        'phone',
        'address',
        'date'
    ];

    public function source(){
        return $this->belongsTo('App\ContactSource');
    }
    public function status(){
        return $this->belongsTo('App\ContactStatus');
    }
    public function account(){
        return $this->belongsTo('App\Account');
    }
    public function user(){
        return $this->belongsTo('App\User');
    }
    public function activities(){
        return $this->hasMany('App\Activity');
    }
}

Next, open the app/ContactSource.php file and update accordingly:

<?php
namespace App;

use Illuminate\Database\Eloquent\Model;

class ContactSource extends Model
{
    public function contacts(){
        return $this->hasMany('App\Contact');
    }
}

Next, open the app/ContactStatus.php file and update accordingly:

<?php
namespace App;

use Illuminate\Database\Eloquent\Model;

class ContactStatus extends Model
{
    public function contacts(){
        return $this->hasMany('App\Contact');
    }
}

Finally, open the app/User.php file and update as follows:

<?php
namespace App;

use Illuminate\Notifications\Notifiable;
use Illuminate\Contracts\Auth\MustVerifyEmail;
use Illuminate\Foundation\Auth\User as Authenticatable;

class User extends Authenticatable
{
    use Notifiable;

    /**
     * The attributes that are mass assignable.
     *
     * @var array
     */
    protected $fillable = [
        'name', 'email', 'password',
    ];

    /**
     * The attributes that should be hidden for arrays.
     *
     * @var array
     */
    protected $hidden = [
        'password', 'remember_token',
    ];

    /**
     * The attributes that should be cast to native types.
     *
     * @var array
     */
    protected $casts = [
        'email_verified_at' => 'datetime',
    ];

    public function contacts(){
        return $this->hasMany('App\Contact');
    }
    public function activities(){
        return $this->hasMany('App\Activity');
    }
    public function accounts(){
        return $this->hasMany('App\Account');
    }
}

Next we'll be creating the REST API controllers.
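As a quick sanity check of the schema, the relationships above map directly onto the foreign keys from our update migrations. Here is a hedged SQL sketch of the equivalent join (table and column names are taken from the migrations above; the query is purely illustrative):

SELECT c.first_name,
       c.last_name,
       cs.status AS contact_status,
       src.name  AS contact_source,
       a.name    AS account
FROM contacts c
JOIN contact_statuses cs ON cs.id = c.status_id
JOIN contact_sources src ON src.id = c.source_id
JOIN accounts a ON a.id = c.account_id;

Eloquent builds comparable queries for you when you access $contact->status, $contact->source and $contact->account.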
  3. Does MySQL have a built-in way to easily take a look at the overall server status, without the need to execute many different custom queries or programs? Maybe something similar to an Oracle AWR report? Starting with MySQL 5.7.9 and later we can use the sys procedure diagnostics() for that, which relies on the information available in the PERFORMANCE_SCHEMA! Let's give it a try by generating an HTML report for the currently running instance (it starts an iteration every 30 seconds and runs for at most 60 seconds, using the current Performance Schema settings):

mysql -u root -p -H -e"CALL sys.diagnostics(60, 30, 'current');" > ./current_instance_report.html

The report contains information such as: wait events, statement execution, I/O, replication, metrics and Standard Monitor output.

For historical analysis we can decide to schedule a cron job for this procedure, for example:

0 * * * * mysql -u root -H -e"CALL sys.diagnostics(3600, 1800, 'current');" > /home/blabla/instance_report_$(date +"%Y-%m-%d_%H-%M")

And that was my first MySQL blog post!
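A footnote on the third argument of sys.diagnostics(): 'current' runs with whatever Performance Schema settings are already in place, while the documented alternatives 'medium' and 'full' temporarily enable progressively more instrumentation for the duration of the run ('full' in particular can add significant overhead on a busy server). A minimal sketch:

-- Collect for 120 seconds, iterating every 30 seconds,
-- temporarily enabling the full instrumentation set.
CALL sys.diagnostics(120, 30, 'full');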
  4. Continuent is pleased to announce that Tungsten Clustering 6.0.5 and Tungsten Replicator 6.0.5 are now available! Our v6.0.5 release fixes a number of bugs and introduces some new features, with improvements across the board in a variety of different components.

Some of the key improvements include:

A new clustering utility script has been added to the release, tungsten_reset_manager, which assists with the graceful reset of the manager's dynamic state files on disk.
Fixed an issue where the tpm command would allocate inconsistent THL listener ports for the Composite Multimaster (CMM) topology.
MySQL ping commands are now reconnected/retried upon a "server gone away" error (Proxy mode ONLY).
The mysql_checker_query script was returning unexpected errors and creating false positives. Changed the script logic to use the timestampdiff function for better accuracy.

Some of the highlights common to both products:

The tpm command now properly handles network interface names containing colons and/or dots.
Instead of searching for a master with an appropriate role (i.e. matching the slave's preferred role) until the timeout is reached, the Replicator will now loop twice before accepting a connection to any host, no matter what its role is.
The --hosts option was not working with the diag sub-command of the tpm command on nodes installed using the INI method. With INI-method deployments, the new behavior is as follows: the tpm diag command alone will obtain diagnostics from the local host only; the tpm diag --hosts host1,host2,hostN command will obtain diagnostics from the specified host(s) only.
The trepctl command now properly handles the -all-services option for the reset sub-command.
The backup process failed with 0-byte store*.properties files or store*.properties files with invalid dates. Changed the process so that invalid backup properties files are skipped.
The command tpm reverse --ini-format now outputs without the double dashes and the trailing backslash.
Fixed the ability to enable parallel apply within a Composite Multimaster (CMM) topology.
Relays are now handled as slaves, so that a relay uses the same code as a slave for its internal connections (disabling binary logging of its internal SQL queries).
Any clear-text passwords gathered via tpm diag are now removed.

Fixes specific to Tungsten Clustering:

Changed the Manager behavior so as to place the replicator online asynchronously, to prevent cctrl from hanging if a slave replicator is put online while the master is offline. Now, if the master is offline, the slave will go into the SYNCHRONIZING state. As the master comes online, the slaves will come online as well.
Do not try to back up a witness server.
The MySQL MyISAM check seemed to fail intermittently with no way to bypass it, so the check has been disabled completely.
Fixed the rpm-based post-install chown command so that symlinked directories get correct ownership.
The Tungsten Clustering RPM now preserves the original OS group memberships for the tungsten user.

Fixes specific to Tungsten Replicator:

The postgres applier now respects the database name set by pgsql-dbname. Specifically, tungsten-replicator/samples/conf/datasources/postgresql.tpl was updated to use the correct variable for the value.

Full release notes are available:
https://docs.continuent.com/tungsten-clustering-6.0/release-notes-6-0-5.html
https://docs.continuent.com/tungsten-replicator-6.0/release-notes-6-0-5.html
  5. ProxySQL is an intelligent and high-performance SQL proxy which supports MySQL, MariaDB and ClickHouse. Recently, ProxySQL 2.0 became GA and it comes with new exciting features such as GTID consistent reads, frontend SSL, and Galera and MySQL Group Replication native support. It is relatively easy to run ProxySQL as a Docker container. We have previously written about how to run ProxySQL on Kubernetes as a helper container or as a Kubernetes service, which is based on ProxySQL 1.x. In this blog post, we are going to use the new version ProxySQL 2.x, which uses a different approach for Galera Cluster configuration.

ProxySQL 2.x Docker Image

We have released a new ProxySQL 2.0 Docker image and it's available on Docker Hub. The README provides a number of configuration examples, particularly for Galera and MySQL Replication, pre and post v2.x. The configuration lines can be defined in a text file and mapped into the container's path at /etc/proxysql.cnf to be loaded into the ProxySQL service.

The image "latest" tag still points to 1.x until ProxySQL 2.0 officially becomes GA (we haven't seen any official release blog/article from the ProxySQL team yet). Which means, whenever you install the ProxySQL image using the latest tag from Severalnines, you will still get version 1.x with it. Take note that the new example configurations also enable ProxySQL web stats (introduced in 1.4.4 but still in beta) - a simple dashboard that summarizes the overall configuration and status of ProxySQL itself.

ProxySQL 2.x Support for Galera Cluster

Let's talk about Galera Cluster native support in greater detail. The new mysql_galera_hostgroups table consists of the following fields:

writer_hostgroup: ID of the hostgroup that will contain all the members that are writers (read_only=0).
backup_writer_hostgroup: If the cluster is running in multi-writer mode (i.e. there are multiple nodes with read_only=0) and max_writers is set to a smaller number than the total number of nodes, the additional nodes are moved to this backup writer hostgroup.
reader_hostgroup: ID of the hostgroup that will contain all the members that are readers (i.e. nodes that have read_only=1).
offline_hostgroup: When ProxySQL monitoring determines a host to be OFFLINE, the host will be moved to the offline_hostgroup.
active: a boolean value (0 or 1) to activate a hostgroup.
max_writers: Controls the maximum number of allowable nodes in the writer hostgroup; as mentioned previously, additional nodes will be moved to the backup_writer_hostgroup.
writer_is_also_reader: When 1, a node in the writer_hostgroup will also be placed in the reader_hostgroup so that it will be used for reads. When set to 2, the nodes from backup_writer_hostgroup will be placed in the reader_hostgroup, instead of the node(s) in the writer_hostgroup.
max_transactions_behind: determines the maximum number of writesets a node in the cluster can have queued before the node is SHUNNED to prevent stale reads (this is determined by querying the wsrep_local_recv_queue Galera variable).
comment: Text field that can be used for any purposes defined by the user.

Here is an example configuration for mysql_galera_hostgroups in table format:

Admin> select * from mysql_galera_hostgroups\G
*************************** 1. row ***************************
       writer_hostgroup: 10
backup_writer_hostgroup: 20
       reader_hostgroup: 30
      offline_hostgroup: 9999
                 active: 1
            max_writers: 1
  writer_is_also_reader: 2
max_transactions_behind: 20
                comment:

ProxySQL performs Galera health checks by monitoring the following MySQL status/variables:

read_only - If ON, then ProxySQL will group the defined host into the reader_hostgroup unless writer_is_also_reader is 1.
wsrep_desync - If ON, ProxySQL will mark the node as unavailable, moving it to the offline_hostgroup.
wsrep_reject_queries - If this variable is ON, ProxySQL will mark the node as unavailable, moving it to the offline_hostgroup (useful in certain maintenance situations).
wsrep_sst_donor_rejects_queries - If this variable is ON, ProxySQL will mark the node as unavailable while the Galera node is serving as an SST donor, moving it to the offline_hostgroup.
wsrep_local_state - If this status returns a value other than 4 (4 means Synced), ProxySQL will mark the node as unavailable and move it into the offline_hostgroup.
wsrep_local_recv_queue - If this status is higher than max_transactions_behind, the node will be shunned.
wsrep_cluster_status - If this status returns a value other than Primary, ProxySQL will mark the node as unavailable and move it into the offline_hostgroup.

Having said that, by combining these new parameters in mysql_galera_hostgroups together with mysql_query_rules, ProxySQL 2.x has the flexibility to fit into many more Galera use cases. For example, one can have single-writer, multi-writer and multi-reader hostgroups defined as the destination hostgroup of a query rule, with the ability to limit the number of writers and finer control over the stale-reads behaviour.

Contrast this with ProxySQL 1.x, where the user had to explicitly define a scheduler to call an external script to perform the backend health checks and update the database server states. This requires some customization of the script (the user has to update the ProxySQL admin user/password/port), plus it depends on an additional tool (the MySQL client) to connect to the ProxySQL admin interface. Here is an example configuration of the Galera health check script scheduler in table format for ProxySQL 1.x:

Admin> select * from scheduler\G
*************************** 1. row ***************************
         id: 1
     active: 1
interval_ms: 2000
   filename: /usr/share/proxysql/tools/proxysql_galera_checker.sh
       arg1: 10
       arg2: 20
       arg3: 1
       arg4: 1
       arg5: /var/lib/proxysql/proxysql_galera_checker.log
    comment:

Besides, since the ProxySQL scheduler thread executes any script independently, there are many versions of health check scripts available out there. All ProxySQL instances deployed by ClusterControl use the default script provided by the ProxySQL installer package.

In ProxySQL 2.x, the max_writers and writer_is_also_reader variables determine how ProxySQL dynamically groups the backend MySQL servers, and directly affect the connection distribution and query routing. For example, consider the following MySQL backend servers:

Admin> select hostgroup_id, hostname, status, weight from mysql_servers;
+--------------+----------+--------+--------+
| hostgroup_id | hostname | status | weight |
+--------------+----------+--------+--------+
| 10           | DB1      | ONLINE | 1      |
| 10           | DB2      | ONLINE | 1      |
| 10           | DB3      | ONLINE | 1      |
+--------------+----------+--------+--------+

Together with the following Galera hostgroups definition:

Admin> select * from mysql_galera_hostgroups\G
*************************** 1. row ***************************
       writer_hostgroup: 10
backup_writer_hostgroup: 20
       reader_hostgroup: 30
      offline_hostgroup: 9999
                 active: 1
            max_writers: 1
  writer_is_also_reader: 2
max_transactions_behind: 20
                comment:

Considering all hosts are up and running, ProxySQL will most likely group the hosts as below. Let's look at them one by one:

writer_is_also_reader=0 - Groups the hosts into 2 hostgroups (writer and backup_writer). The writer is part of the backup_writer. Since the writer is not a reader, there is nothing in hostgroup 30 (reader), because none of the hosts are set with read_only=1. It is not a common practice in Galera to enable the read-only flag.
writer_is_also_reader=1 - Groups the hosts into 3 hostgroups (writer, backup_writer and reader). The variable read_only=0 in Galera has no effect, thus the writer is also in hostgroup 30 (reader). The writer is not part of the backup_writer.
writer_is_also_reader=2 - Similar to writer_is_also_reader=1, however the writer is part of the backup_writer.

With this configuration, one can have various choices of hostgroup destination to cater for specific workloads. "Hotspot" writes can be configured to go to only one server to reduce multi-master conflicts, non-conflicting writes can be distributed equally on the other masters, most reads can be distributed evenly on all MySQL servers or non-writers, critical reads can be forwarded to the most up-to-date servers, and analytical reads can be forwarded to a slave replica.

ProxySQL Deployment for Galera Cluster

In this example, suppose we already have a three-node Galera Cluster deployed by ClusterControl, as shown in the following diagram:

Our WordPress applications are running on Docker, while the WordPress database is hosted on our Galera Cluster running on bare-metal servers. We decided to run a ProxySQL container alongside our WordPress containers, to have better control over WordPress database query routing and to fully utilize our database cluster infrastructure.
Since the read-write ratio is around 80%-20%, we want to configure ProxySQL to:

Forward all writes to one Galera node (less conflict, focus on writes),
Balance all reads to the other two Galera nodes (better distribution for the majority of the workload).

Firstly, create a ProxySQL configuration file inside the Docker host so we can map it into our container:

$ mkdir /root/proxysql-docker
$ vim /root/proxysql-docker/proxysql.cnf

Then, copy the following lines (we will explain the configuration lines further down):

datadir="/var/lib/proxysql"

admin_variables=
{
    admin_credentials="admin:admin"
    mysql_ifaces="0.0.0.0:6032"
    refresh_interval=2000
    web_enabled=true
    web_port=6080
    stats_credentials="stats:admin"
}

mysql_variables=
{
    threads=4
    max_connections=2048
    default_query_delay=0
    default_query_timeout=36000000
    have_compress=true
    poll_timeout=2000
    interfaces="0.0.0.0:6033;/tmp/proxysql.sock"
    default_schema="information_schema"
    stacksize=1048576
    server_version="5.1.30"
    connect_timeout_server=10000
    monitor_history=60000
    monitor_connect_interval=200000
    monitor_ping_interval=200000
    ping_interval_server_msec=10000
    ping_timeout_server=200
    commands_stats=true
    sessions_sort=true
    monitor_username="proxysql"
    monitor_password="proxysqlpassword"
    monitor_galera_healthcheck_interval=2000
    monitor_galera_healthcheck_timeout=800
}

mysql_galera_hostgroups =
(
    {
        writer_hostgroup=10
        backup_writer_hostgroup=20
        reader_hostgroup=30
        offline_hostgroup=9999
        max_writers=1
        writer_is_also_reader=1
        max_transactions_behind=30
        active=1
    }
)

mysql_servers =
(
    { address="db1.cluster.local" , port=3306 , hostgroup=10, max_connections=100 },
    { address="db2.cluster.local" , port=3306 , hostgroup=10, max_connections=100 },
    { address="db3.cluster.local" , port=3306 , hostgroup=10, max_connections=100 }
)

mysql_query_rules =
(
    {
        rule_id=100
        active=1
        match_pattern="^SELECT .* FOR UPDATE"
        destination_hostgroup=10
        apply=1
    },
    {
        rule_id=200
        active=1
        match_pattern="^SELECT .*"
        destination_hostgroup=20
        apply=1
    },
    {
        rule_id=300
        active=1
        match_pattern=".*"
        destination_hostgroup=10
        apply=1
    }
)

mysql_users =
(
    { username = "wordpress", password = "passw0rd", default_hostgroup = 10, transaction_persistent = 0, active = 1 },
    { username = "sbtest", password = "passw0rd", default_hostgroup = 10, transaction_persistent = 0, active = 1 }
)

Now, let's pay a visit to some of the most important configuration sections. Firstly, we define the Galera hostgroups configuration as below:

mysql_galera_hostgroups =
(
    {
        writer_hostgroup=10
        backup_writer_hostgroup=20
        reader_hostgroup=30
        offline_hostgroup=9999
        max_writers=1
        writer_is_also_reader=1
        max_transactions_behind=30
        active=1
    }
)

Hostgroup 10 will be the writer_hostgroup, hostgroup 20 the backup_writer and hostgroup 30 the reader. We set max_writers to 1 so we can have a single-writer hostgroup for hostgroup 10, where all writes should be sent. Then, we set writer_is_also_reader to 1, which makes all Galera nodes readers as well, suitable for queries that can be equally distributed to all nodes. Hostgroup 9999 is reserved for the offline_hostgroup, used if ProxySQL detects non-operational Galera nodes.
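One assumption worth spelling out in the configuration above: the monitor credentials (monitor_username / monitor_password in mysql_variables) must exist on the Galera nodes themselves, otherwise the health checks will fail. A hedged sketch of creating that user (host pattern and privileges are illustrative; the monitor generally only needs to connect and read status/variables, so USAGE is usually enough, but adjust to your environment):

-- Run once on the Galera cluster; the DDL replicates to all nodes.
CREATE USER 'proxysql'@'%' IDENTIFIED BY 'proxysqlpassword';
GRANT USAGE ON *.* TO 'proxysql'@'%';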
Then, we configure our MySQL servers, defaulting to hostgroup 10:

mysql_servers =
(
    { address="db1.cluster.local" , port=3306 , hostgroup=10, max_connections=100 },
    { address="db2.cluster.local" , port=3306 , hostgroup=10, max_connections=100 },
    { address="db3.cluster.local" , port=3306 , hostgroup=10, max_connections=100 }
)

With the above configurations, ProxySQL will "see" our hostgroups as below:

Then, we define the query routing through query rules. Based on our requirement, all reads should be sent to all Galera nodes except the writer (hostgroup 20), and everything else is forwarded to hostgroup 10 for the single writer:

mysql_query_rules =
(
    {
        rule_id=100
        active=1
        match_pattern="^SELECT .* FOR UPDATE"
        destination_hostgroup=10
        apply=1
    },
    {
        rule_id=200
        active=1
        match_pattern="^SELECT .*"
        destination_hostgroup=20
        apply=1
    },
    {
        rule_id=300
        active=1
        match_pattern=".*"
        destination_hostgroup=10
        apply=1
    }
)

Finally, we define the MySQL users that will be passed through ProxySQL:

mysql_users =
(
    { username = "wordpress", password = "passw0rd", default_hostgroup = 10, transaction_persistent = 0, active = 1 },
    { username = "sbtest", password = "passw0rd", default_hostgroup = 10, transaction_persistent = 0, active = 1 }
)

We set transaction_persistent to 0 so all connections coming from these users will respect the query rules for read and write routing. Otherwise, the connections would end up hitting one hostgroup, which defeats the purpose of load balancing. Do not forget to create those users first on all MySQL servers. For ClusterControl users, you may use the Manage -> Schemas and Users feature to create those users.

We are now ready to start our container. We are going to map the ProxySQL configuration file as a bind mount when starting up the ProxySQL container. Thus, the run command will be:

$ docker run -d \
--name proxysql2 \
--hostname proxysql2 \
--publish 6033:6033 \
--publish 6032:6032 \
--publish 6080:6080 \
--restart=unless-stopped \
-v /root/proxysql-docker/proxysql.cnf:/etc/proxysql.cnf \
severalnines/proxysql:2.0

Finally, change the WordPress database to point to the ProxySQL container on port 6033, for instance:

$ docker run -d \
--name wordpress \
--publish 80:80 \
--restart=unless-stopped \
-e WORDPRESS_DB_HOST=proxysql2:6033 \
-e WORDPRESS_DB_USER=wordpress \
-e WORDPRESS_DB_PASSWORD=passw0rd \
wordpress

At this point, our architecture is looking something like this:

If you want the ProxySQL container to be persistent, map /var/lib/proxysql/ to a Docker volume or bind mount, for example:

$ docker run -d \
--name proxysql2 \
--hostname proxysql2 \
--publish 6033:6033 \
--publish 6032:6032 \
--publish 6080:6080 \
--restart=unless-stopped \
-v /root/proxysql-docker/proxysql.cnf:/etc/proxysql.cnf \
-v proxysql-volume:/var/lib/proxysql \
severalnines/proxysql:2.0

Keep in mind that running with persistent storage like the above will make our /root/proxysql-docker/proxysql.cnf obsolete on the second restart. This is due to ProxySQL's multi-layer configuration, whereby if /var/lib/proxysql/proxysql.db exists, ProxySQL will skip loading options from the configuration file and load whatever is in the SQLite database instead (unless you start the proxysql service with the --initial flag). Having said that, subsequent ProxySQL configuration management has to be performed via the ProxySQL admin console on port 6032, instead of the configuration file.
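As a hedged sketch of what such a runtime change looks like from the admin console (mysql_galera_hostgroups belongs to the MYSQL SERVERS configuration domain, so that is what gets loaded and saved):

-- Example only: raise the stale-read threshold at runtime.
UPDATE mysql_galera_hostgroups SET max_transactions_behind = 50;
LOAD MYSQL SERVERS TO RUNTIME;  -- apply to the running configuration
SAVE MYSQL SERVERS TO DISK;     -- persist into /var/lib/proxysql/proxysql.db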
Monitoring

The ProxySQL process logs to syslog by default, and you can view the log using the standard Docker commands:

$ docker ps
$ docker logs proxysql2

To verify the current hostgroups, query the runtime_mysql_servers table:

$ docker exec -it proxysql2 mysql -uadmin -padmin -h127.0.0.1 -P6032 --prompt='Admin> '

Admin> select hostgroup_id,hostname,status from runtime_mysql_servers;
+--------------+--------------+--------+
| hostgroup_id | hostname     | status |
+--------------+--------------+--------+
| 10           | 192.168.0.21 | ONLINE |
| 30           | 192.168.0.21 | ONLINE |
| 30           | 192.168.0.22 | ONLINE |
| 30           | 192.168.0.23 | ONLINE |
| 20           | 192.168.0.22 | ONLINE |
| 20           | 192.168.0.23 | ONLINE |
+--------------+--------------+--------+

If the selected writer goes down, it will be transferred to the offline_hostgroup (HID 9999):

Admin> select hostgroup_id,hostname,status from runtime_mysql_servers;
+--------------+--------------+--------+
| hostgroup_id | hostname     | status |
+--------------+--------------+--------+
| 10           | 192.168.0.22 | ONLINE |
| 9999         | 192.168.0.21 | ONLINE |
| 30           | 192.168.0.22 | ONLINE |
| 30           | 192.168.0.23 | ONLINE |
| 20           | 192.168.0.23 | ONLINE |
+--------------+--------------+--------+

The above topology changes can be illustrated in the following diagram:

We have also enabled the web stats UI with admin-web_enabled=true. To access the web UI, simply go to the Docker host on port 6080, for example http://192.168.0.200:6080, and you will be prompted with a username/password pop-up. Enter the credentials as defined under admin-stats_credentials and you should see the following page:

By monitoring the MySQL connection pool table, we can get an overview of the connection distribution for all hostgroups:

Admin> select hostgroup, srv_host, status, ConnUsed, MaxConnUsed, Queries from stats.stats_mysql_connection_pool order by srv_host;
+-----------+--------------+--------+----------+-------------+---------+
| hostgroup | srv_host     | status | ConnUsed | MaxConnUsed | Queries |
+-----------+--------------+--------+----------+-------------+---------+
| 20        | 192.168.0.23 | ONLINE | 5        | 24          | 11458   |
| 30        | 192.168.0.23 | ONLINE | 0        | 0           | 0       |
| 20        | 192.168.0.22 | ONLINE | 2        | 24          | 11485   |
| 30        | 192.168.0.22 | ONLINE | 0        | 0           | 0       |
| 10        | 192.168.0.21 | ONLINE | 32       | 32          | 9746    |
| 30        | 192.168.0.21 | ONLINE | 0        | 0           | 0       |
+-----------+--------------+--------+----------+-------------+---------+

The output above shows that hostgroup 30 does not process anything, because our query rules do not have this hostgroup configured as a destination hostgroup. The statistics related to the Galera nodes can be viewed in the mysql_server_galera_log table:

Admin> select * from mysql_server_galera_log order by time_start_us desc limit 3\G
*************************** 1. row ***************************
hostname: 192.168.0.23
port: 3306
time_start_us: 1552992553332489
success_time_us: 2045
primary_partition: YES
read_only: NO
wsrep_local_recv_queue: 0
wsrep_local_state: 4
wsrep_desync: NO
wsrep_reject_queries: NO
wsrep_sst_donor_rejects_queries: NO
error: NULL
*************************** 2. row ***************************
hostname: 192.168.0.22
port: 3306
time_start_us: 1552992553329653
success_time_us: 2799
primary_partition: YES
read_only: NO
wsrep_local_recv_queue: 0
wsrep_local_state: 4
wsrep_desync: NO
wsrep_reject_queries: NO
wsrep_sst_donor_rejects_queries: NO
error: NULL
*************************** 3. row ***************************
hostname: 192.168.0.21
port: 3306
time_start_us: 1552992553329013
success_time_us: 2715
primary_partition: YES
read_only: NO
wsrep_local_recv_queue: 0
wsrep_local_state: 4
wsrep_desync: NO
wsrep_reject_queries: NO
wsrep_sst_donor_rejects_queries: NO
error: NULL

The resultset returns the related MySQL variable/status state for every Galera node at a particular timestamp. In this configuration, we configured the Galera health check to run every 2 seconds (monitor_galera_healthcheck_interval=2000). Hence, the maximum failover time would be around 2 seconds if a topology change happens to the cluster.

Related resources:
MySQL on Docker: Running ProxySQL as Kubernetes Service
MySQL on Docker: Running ProxySQL as a Helper Container on Kubernetes
Database Load Balancing for MySQL and MariaDB with ProxySQL - Tutorial

References:
ProxySQL Native Galera Support
HA and clustering solution: ProxySQL as an intelligent router for Galera and Group Replication
ProxySQL Docker image by Severalnines
How to Monitor ProxySQL with Prometheus and ClusterControl

Tags: proxysql, docker, galera, MySQL, MariaDB