Planet MySQL

1. InnoDB Page Compression Explained

We have several customers with multi-terabyte database infrastructures on SSDs. SSDs are a great investment for performance, but they are also expensive and have shorter lifespans, so storage efficiency is something we are very careful about on SSDs. At MinervaDB Labs we spend a considerable amount of time researching the benefits of InnoDB page compression and the most common mistakes made with it. Compressed tables were first introduced in 2008 with the InnoDB plugin for MySQL 5.1. Facebook has been a major contributor to this work, and most of it was later implemented in the upstream MySQL code as well. Compression in InnoDB can be implemented in two ways: with the older table compression (the Barracuda file format and ROW_FORMAT=COMPRESSED), or with the newer transparent page compression described here.

How does InnoDB page compression work?

When a page is written, it is compressed using the specified compression algorithm and written to disk, where the hole punching mechanism releases empty blocks from the end of the page. (Several popular Linux file systems already support hole punching: XFS since Linux 2.6.38, ext4 since Linux 3.0, tmpfs (/dev/shm) since Linux 3.5, and Btrfs since Linux 3.7.) If compression fails, the data is written out as-is. MySQL implements compression with the help of the well-known zlib library, which implements the LZ77 compression algorithm. Note that the file system block size sets a floor on what hole punching can save: if innodb_page_size=16K and the file system block size is 4K, page data must compress to 12K or less to make hole punching possible.

Some operating systems implement compression at the file system level instead. There, files are typically divided into fixed-size blocks that are compressed into variable-size blocks, which easily leads to fragmentation, and every time something inside a block is modified, the whole block is recompressed before it is written to disk. These properties make file-system-level compression unsuitable for an update-intensive database system.
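As a quick sanity check before enabling page compression, you can confirm the server's page size; the 12K threshold in the comment below assumes the 16K page / 4K file system block example above:

```sql
-- Confirm the InnoDB page size (16K by default). With a 4K file
-- system block size, a page must compress to at most
-- innodb_page_size - 4K (here 12K) before hole punching can
-- release even a single block back to the file system.
SHOW VARIABLES LIKE 'innodb_page_size';
```

If most of your pages compress to just under the page size, hole punching releases nothing, so the storage savings should be measured rather than assumed.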
InnoDB Page Compression Supported Platforms

InnoDB page compression requires sparse file and hole punching support. On Windows you can implement page compression with NTFS; the following Linux platforms support hole punching:

RHEL 7 and derived distributions that use kernel version 3.10.0-123 or higher
OEL 5.10 (UEK2) kernel version 2.6.39 or higher
OEL 6.5 (UEK3) kernel version 3.8.13 or higher
OEL 7.0 kernel version 3.8.13 or higher
SLE11 kernel version 3.0-x
SLE12 kernel version 3.12-x
OES11 kernel version 3.0-x
Ubuntu 14.04 LTS kernel version 3.13 or higher
Ubuntu 12.04 LTS kernel version 3.2 or higher
Debian 7 kernel version 3.2 or higher

InnoDB compression on Windows

The way NTFS clustering is designed, we hardly get any benefit from InnoDB page compression. The hole punch is done on a "compression unit" derived from the cluster size (see the table below), which means that by default you cannot punch a hole if the cluster size is 8K or larger. Here is a breakdown for smaller cluster sizes:

Cluster Size   Compression Unit
512 Bytes      8 KB
1 KB           16 KB
2 KB           32 KB
4 KB           64 KB

In other words, for page compression to work on Windows, the file system must be created with a cluster size smaller than 4K, and innodb_page_size must be at least twice the size of the compression unit. For example, you could build the file system with a cluster size of 512 bytes (which gives a compression unit of 8KB) and initialize InnoDB with an innodb_page_size of 16K or greater.

How to enable InnoDB page compression?

You can enable page compression by specifying the COMPRESSION attribute in the CREATE TABLE statement, for example:

CREATE TABLE tab11 (col101 INT) COMPRESSION="zlib";

You can also enable page compression with an ALTER TABLE statement, but ALTER TABLE ... COMPRESSION updates only the tablespace compression attribute.
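Picking up the CREATE TABLE example above, a minimal sketch (the table and column names here are illustrative, not from the original post): page compression applies only to tables stored in their own file-per-table tablespaces, which is the default in recent MySQL versions.

```sql
-- Page compression requires a file-per-table tablespace
-- (the default); tables in shared tablespaces are not supported.
SET GLOBAL innodb_file_per_table = ON;
-- "lz4" assumes the server was built with LZ4 support;
-- "zlib" is always available.
CREATE TABLE tab12 (col101 INT) COMPRESSION="lz4";
```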
Writes to the tablespace that occur after setting the new compression algorithm use the new setting; to apply the new compression algorithm to existing pages, you must rebuild the table using OPTIMIZE TABLE.

Disabling InnoDB Page Compression

You can disable InnoDB page compression by setting COMPRESSION = None in an ALTER TABLE statement. Writes to the tablespace that occur after setting COMPRESSION=None no longer use page compression. To uncompress existing pages, you must rebuild the table using OPTIMIZE TABLE after setting COMPRESSION=None.

Monitoring tables using Page Compression

You can identify tables with page compression enabled from the CREATE_OPTIONS column of INFORMATION_SCHEMA.TABLES:

mysql> SELECT TABLE_NAME, TABLE_SCHEMA, CREATE_OPTIONS FROM INFORMATION_SCHEMA.TABLES WHERE CREATE_OPTIONS LIKE '%COMPRESSION=%';
+------------+--------------+--------------------+
| TABLE_NAME | TABLE_SCHEMA | CREATE_OPTIONS     |
+------------+--------------+--------------------+
| Ad_Clicks  | ADSE         | COMPRESSION="zlib" |
+------------+--------------+--------------------+

InnoDB Page Compression Limitations

Cross-platform dependency issues – page-compressed tablespaces can be copied between Linux and Windows servers only if the compression algorithm is available on both servers.

InnoDB page compression does not always work as expected:

Page compression is not supported for tables that reside in shared tablespaces, which include the system tablespace, temporary tablespaces, and general tablespaces.
Page compression is not supported for undo log tablespaces.
Page compression is not supported for redo log pages.
R-tree pages, which are used for spatial indexes, are not compressed.
Pages that belong to compressed tables (ROW_FORMAT=COMPRESSED) are left as-is.
During recovery, updated pages are written out in an uncompressed form.
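A minimal sketch of the enable/disable workflow described above, reusing the tab11 table from the earlier example:

```sql
-- Switch the compression algorithm; this only affects pages
-- written from now on...
ALTER TABLE tab11 COMPRESSION="zlib";
-- ...so rebuild the table to rewrite existing pages compressed.
OPTIMIZE TABLE tab11;

-- Disabling works the same way: set COMPRESSION to None,
-- then rebuild to uncompress the existing pages.
ALTER TABLE tab11 COMPRESSION="None";
OPTIMIZE TABLE tab11;
```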
Fragmentation issues – since hole punching releases blocks back to the file system free list, you can expect severe fragmentation in I/O-intensive applications.

References

Percona XtraDB: Compressed Columns with Dictionaries – An Alternative to InnoDB Table Compression, by Yura Sorokin
InnoDB page compression explained in the MySQL documentation
InnoDB Page Compression: the Good, the Bad and the Ugly, by Vadim Tkachenko
On InnoDB Data Compression in MySQL, by Valerii Kravchuk
InnoDB Transparent Page Compression, by Sunny Bains
MyRocks and InnoDB compression explained, by Mark Callaghan
InnoDB compressed columns, introduced by Yura Sorokin

The post InnoDB Page Compression – MySQL 8 Compression appeared first on MySQL Consulting, Support and Remote DBA Services.
2. We have a new version of the MySQL Cluster Auto Installer. I have prepared 3 Youtube videos that show how to use the Auto Installer to install and set up a cluster on your local machine. This is my first attempt at making Youtube videos to explain things around MySQL Cluster.

The Auto Installer is intended as a tool to make it easy to get a cluster up and running. It is NOT intended for managing a cluster.

The first Youtube video, Install MySQL Cluster 8.0.18 on Mac OS X, shows how to install MySQL Cluster 8.0.18 on Mac OS X. This is obviously trivial, so it should be straightforward to do even without the video.

The second Youtube video, Starting a local MySQL Cluster using Auto Installer, sets up a small cluster with 1 management server, 2 data nodes and 1 MySQL Server and explains some details around this.

Using MySQL Cluster AutoInstaller to start development with NDB goes a bit deeper and also shows how to make use of the cluster and do some trivial operations through a MySQL client and the NDB management client. It also shows how one can extend the possible configurations supported by directly manipulating the configuration files deployed by the Auto Installer.

A little note for Mac OS X users with high-resolution screens: full 4K resolution is available through Google Chrome, not through Safari.
3. NoSQL + SQL = MySQL 8: Open Source India 2019 keynote from Sanjay Manwani. Slides from the keynote presented at Open Source India 2019 at the Nimhans convention center, Bangalore. As usual, lots of interesting folks and lots of focus on Open Source. Met people from the SODA Foundation, who are trying to standardize the IO layer across all cloud implementations; all the best, guys. Also met folks from MOSIP, who have an ongoing effort to help countries create their own UID, and it seems they already have some traction. Also met an interesting person thinking about Indian design and creativity in software. After Chumbak, it does make sense to think about how not only UI but also the software development process needs more creativity. Thank you EFY for the opportunity. Great job and all the best for the future.
4. Yesterday, Bhuvanesh published an article about how to verify the difference between the disk space allocated for a tablespace and the data in it. I commented with an old post explaining how to get similar info using only SQL, in case you don't have filesystem access. And finally, my friend Bill Karwin commented that this info is not always accurate, which, of course, I agree with. This is why I checked what info we have available and tried to find a better answer.

First, please remember that information_schema statistics are cached by default:

mysql> show global variables like 'information_schema_stats_expiry';
+---------------------------------+-------+
| Variable_name                   | Value |
+---------------------------------+-------+
| information_schema_stats_expiry | 86400 |
+---------------------------------+-------+

and that for better results, it is always advised to run ANALYZE TABLE. For the following examples, I set information_schema_stats_expiry to 0.

The New Query

The new query takes advantage of the FILE_SIZE column in information_schema.INNODB_TABLESPACES:

> SELECT NAME, TABLE_ROWS,
         format_bytes(data_length) DATA_SIZE,
         format_bytes(index_length) INDEX_SIZE,
         format_bytes(data_length+index_length) TOTAL_SIZE,
         format_bytes(data_free) DATA_FREE,
         format_bytes(FILE_SIZE) FILE_SIZE,
         format_bytes((FILE_SIZE/10 - (data_length/10 + index_length/10))*10) WASTED_SIZE
    FROM information_schema.TABLES as t
    JOIN information_schema.INNODB_TABLESPACES as it
      ON it.NAME = concat(table_schema,"/",table_name)
   ORDER BY (data_length + index_length) desc LIMIT 5;
+-------------------+------------+------------+------------+------------+------------+------------+-------------+
| NAME              | TABLE_ROWS | DATA_SIZE  | INDEX_SIZE | TOTAL_SIZE | DATA_FREE  | FILE_SIZE  | WASTED_SIZE |
+-------------------+------------+------------+------------+------------+------------+------------+-------------+
| big/testing       |   10241204 | 647.98 MiB | 0 bytes    | 647.98 MiB | 2.00 MiB   | 660.00 MiB | 12.02 MiB   |
| docstore/all_recs |      24353 | 17.56 MiB  | 0 bytes    | 17.56 MiB  | 0 bytes    | 25.00 MiB  | 7.44 MiB    |
| big/pktest        |     111649 | 11.55 MiB  | 0 bytes    | 11.55 MiB  | 0 bytes    | 19.00 MiB  | 7.45 MiB    |
| big/pktest_seq    |      81880 | 6.52 MiB   | 0 bytes    | 6.52 MiB   | 0 bytes    | 14.00 MiB  | 7.48 MiB    |
| library/books     |         39 | 384.00 KiB | 16.00 KiB  | 400.00 KiB | 0 bytes    | 464.00 KiB | 64.00 KiB   |
+-------------------+------------+------------+------------+------------+------------+------------+-------------+

We can see that MySQL estimates the data size of my biggest table at 648MB and that 660MB are used on disk. The latter is very easy to verify:

$ sudo ls -lh /var/lib/mysql/big/testing.ibd
-rw-r----- 1 mysql mysql 660M Oct 22 00:19 /var/lib/mysql/big/testing.ibd

As I recommended, it is always good to run ANALYZE TABLE:

> analyze table big.testing;
+-------------+---------+----------+----------+
| Table       | Op      | Msg_type | Msg_text |
+-------------+---------+----------+----------+
| big.testing | analyze | status   | OK       |
+-------------+---------+----------+----------+

And we can run our query again:

+-------------------+------------+------------+------------+------------+------------+------------+-------------+
| NAME              | TABLE_ROWS | DATA_SIZE  | INDEX_SIZE | TOTAL_SIZE | DATA_FREE  | FILE_SIZE  | WASTED_SIZE |
+-------------------+------------+------------+------------+------------+------------+------------+-------------+
| big/testing       |    9045529 | 582.42 MiB | 0 bytes    | 582.42 MiB | 67.00 MiB  | 660.00 MiB | 77.58 MiB   |
| docstore/all_recs |      24353 | 17.56 MiB  | 0 bytes    | 17.56 MiB  | 0 bytes    | 25.00 MiB  | 7.44 MiB    |
| big/pktest        |     111649 | 11.55 MiB  | 0 bytes    | 11.55 MiB  | 0 bytes    | 19.00 MiB  | 7.45 MiB    |
| big/pktest_seq    |      81880 | 6.52 MiB   | 0 bytes    | 6.52 MiB   | 0 bytes    | 14.00 MiB  | 7.48 MiB    |
| library/books     |         39 | 384.00 KiB | 16.00 KiB  | 400.00 KiB | 0 bytes    | 464.00 KiB | 64.00 KiB   |
+-------------------+------------+------------+------------+------------+------------+------------+-------------+

We can see now that the statistics have been updated: according to my previous post we are losing 67MB, but the new query, comparing against the disk, shows we are wasting 77.5MB.

Let's see how the table looks using innodb_ruby:

Recovering the disk space

Let's see if we can recover some disk space:

> OPTIMIZE TABLE big.testing;
+-------------+----------+----------+-------------------------------------------------------------------+
| Table       | Op       | Msg_type | Msg_text                                                          |
+-------------+----------+----------+-------------------------------------------------------------------+
| big.testing | optimize | note     | Table does not support optimize, doing recreate + analyze instead |
| big.testing | optimize | status   | OK                                                                |
+-------------+----------+----------+-------------------------------------------------------------------+
2 rows in set (1 min 4.8855 sec)

And we can check again:

+-------------------+------------+------------+------------+------------+------------+------------+-------------+
| NAME              | TABLE_ROWS | DATA_SIZE  | INDEX_SIZE | TOTAL_SIZE | DATA_FREE  | FILE_SIZE  | WASTED_SIZE |
+-------------------+------------+------------+------------+------------+------------+------------+-------------+
| big/testing       |    9045529 | 582.42 MiB | 0 bytes    | 582.42 MiB | 67.00 MiB  | 584.00 MiB | 1.58 MiB    |
| docstore/all_recs |      24353 | 17.56 MiB  | 0 bytes    | 17.56 MiB  | 0 bytes    | 25.00 MiB  | 7.44 MiB    |
| big/pktest        |     111649 | 11.55 MiB  | 0 bytes    | 11.55 MiB  | 0 bytes    | 19.00 MiB  | 7.45 MiB    |
| big/pktest_seq    |      81880 | 6.52 MiB   | 0 bytes    | 6.52 MiB   | 0 bytes    | 14.00 MiB  | 7.48 MiB    |
| library/books     |         39 | 384.00 KiB | 16.00 KiB  | 400.00 KiB | 0 bytes    | 464.00 KiB | 64.00 KiB   |
+-------------------+------------+------------+------------+------------+------------+------------+-------------+

We can see that we have now regained some disk space!
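As the OPTIMIZE TABLE output above notes, for InnoDB the operation is mapped to a recreate plus analyze, so a sketch of the equivalent explicit rebuild (reusing the big.testing table from the example) would be:

```sql
-- Rebuild the tablespace explicitly; for InnoDB this is what
-- OPTIMIZE TABLE does under the hood (recreate + analyze).
ALTER TABLE big.testing ENGINE=InnoDB;
ANALYZE TABLE big.testing;
```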
So even if this is not always 100% accurate, this method already gives you a very-close-to-reality view of how your InnoDB tablespaces are using the disk, and of when you would benefit from rebuilding a tablespace.

MySQL Shell Plugin

I've updated the innodb/fragmented MySQL Shell Plugin on my GitHub with a new method:
5. Dear MySQL users,

The MySQL Windows Experience Team is proud to announce the release of MySQL for Visual Studio 1.2.9. This is a maintenance release for 1.2.x and can be used in production environments.

MySQL for Visual Studio is a product that includes all of the Visual Studio integration functionality to create and manage MySQL databases when developing .NET applications. It is installed using the MySQL Installer for Windows, which comes in 2 versions:

   * Full (415.1 MB), which includes a complete set of MySQL products with their binaries included in the downloaded bundle.
   * Web (18.6 MB, a network install), which will just pull MySQL for Visual Studio over the web and install it when run.

You can download MySQL Installer from our official Downloads page. MySQL for Visual Studio can also be downloaded by using the product standalone installer.

Changes in MySQL for Visual Studio 1.2.9 (2019-10-21, General Availability)

Functionality Added or Changed

   * MySQL for Visual Studio now prompts for action when it detects any conflicts in its configuration files that were introduced after MySQL for Visual Studio was installed. Errors associated with this type of configuration conflict prevent the creation of data sources and table adapters. A new Configuration Update Tool can resolve the conflicts between Connector/NET and MySQL for Visual Studio libraries as they are detected (see MySQL for Visual Studio Configuration Update Tool). To complete the action, it is necessary to restart affected versions of Visual Studio manually after the tool finishes updating the files. (Bug #29884031, Bug #82617)

   * MySQL for Visual Studio now supports all editions of Microsoft Visual Studio 2019.
In addition, this release removes support for Visual Studio versions 2012 and 2013. (Bug #29616463, Bug #94937)

   * The MySQL Website Configuration tool was renamed and extended to also automate entry updates to the app.config file (in addition to the web.config file). The newly renamed MySQL Application Configuration tool preserves the functionality used to simplify website development and now extends the configuration capabilities of the tool to manage dependencies required when running the Entity Data Model Wizard. (Bug #29490017)

   * Connections to MySQL using SSL PEM encryption or standard TCP/IP over SSH are now supported by MySQL for Visual Studio (see Making a Connection).

Bugs Fixed

   * Several installation errors caused the wrong version or edition of Visual Studio to be identified, which then resulted in the deployment of MySQL for Visual Studio files to the wrong folder, or to a file structure that represented multiple versions of Visual Studio that were neither selected nor installed on the host computer. (Bug #30225436, Bug #96576)

   * A secondary window opened unexpectedly when a user with insufficient database privileges (GRANT SELECT and GRANT SHOW VIEW) attempted to alter the definition of an existing view. This fix introduces an error message to explain the condition in detail. (Bug #30001906)

   * From code, it was possible to create an instance of SqlDataSource, make a connection to a MySQL server, and then populate an ASP.NET control. However, attempting the same sequence using the Visual Studio Configure Data Source designer produced an object-reference error when the data source was being configured in the designer (and MySQL Connector/NET 6.10.8 was installed), and also at runtime of the web application.
As part of a related fix, the data source now populates ASP.NET controls from custom queries as expected at runtime. Likewise, the designer now permits configuration when the 6.10.8 version of MySQL Connector/NET is installed, with one caution: when using the Specify columns from a table or view option, the Configure Data Source designer generates the query using bracket characters ([]) as delimiters for identifiers, which produces an invalid MySQL statement. The brackets can be removed manually after the data source is created. Alternatively, the second option in the designer associated with generating queries from the data source, Specify a custom SQL statement or stored procedure, is the preferred option to use. (Bug #28148998, Bug #91136)

   * The TableAdapter Configuration Wizard in Visual Studio, configured with a valid MySQL connection, produced an error instead of generating the expected TableAdapter. Now, the error condition is resolved when MySQL Connector/NET 6.10.8 (or higher) or MySQL Connector/NET 8.0.14 (or higher) is available for use by MySQL for Visual Studio. (Bug #27857627, Bug #90390)

On Behalf of the MySQL Release Engineering Team,
Sreedhar S