- The rename is done atomically, which means that no other thread can
- access any of the tables while the rename is running. This makes it
- possible to replace a table with an empty one:
- CREATE TABLE new_table (...);
- RENAME TABLE old_table TO backup_table, new_table TO old_table;
- The rename is done from left to right, which means that if you want to
- swap two table names, you have to:
- RENAME TABLE old_table TO backup_table,
- new_table TO old_table,
- backup_table TO new_table;
- As long as two databases are on the same disk you can also rename from
- one database to another:
- RENAME TABLE current_database.table_name TO other_database.table_name;
- When you execute `RENAME', you can't have any locked tables or active
- transactions. You must also have the `ALTER' and `DROP' privilege on
- the original table and `CREATE' and `INSERT' privilege on the new table.
- If *MySQL* encounters any errors in a multiple table rename, it will do
- a reverse rename for all renamed tables to get everything back to the
- original state.
- `DROP TABLE' Syntax
- ===================
- DROP TABLE [IF EXISTS] tbl_name [, tbl_name,...] [RESTRICT | CASCADE]
- `DROP TABLE' removes one or more tables. All table data and the table
- definition are _removed_, so *be careful* with this command!
- In *MySQL* Version 3.22 or later, you can use the keywords `IF EXISTS'
- to prevent an error from occurring for tables that don't exist.
- `RESTRICT' and `CASCADE' are allowed to make porting easier. For the
- moment they don't do anything.
- *NOTE*: `DROP TABLE' is not transaction safe and will automatically
- commit any active transactions.
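- For example, a minimal sketch (the table name `old_backup' is
- hypothetical) that drops a table only if it exists, so no error occurs
- when it is already gone:
- mysql> DROP TABLE IF EXISTS old_backup;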
- `OPTIMIZE TABLE' Syntax
- =======================
- OPTIMIZE TABLE tbl_name[,tbl_name]...
- `OPTIMIZE TABLE' should be used if you have deleted a large part of a
- table or if you have made many changes to a table with variable-length
- rows (tables that have `VARCHAR', `BLOB', or `TEXT' columns). Deleted
- records are maintained in a linked list and subsequent `INSERT'
- operations reuse old record positions. You can use `OPTIMIZE TABLE' to
- reclaim the unused space and to defragment the data file.
- For the moment `OPTIMIZE TABLE' only works on *MyISAM* and `BDB'
- tables. For `BDB' tables, `OPTIMIZE TABLE' is currently mapped to
- `ANALYZE TABLE'. *Note ANALYZE TABLE::.
- You can get `OPTIMIZE TABLE' to work on other table types by starting
- `mysqld' with `--skip-new' or `--safe-mode', but in this case `OPTIMIZE
- TABLE' is just mapped to `ALTER TABLE'.
- `OPTIMIZE TABLE' works the following way:
- * If the table has deleted or split rows, repair the table.
- * If the index pages are not sorted, sort them.
- * If the statistics are not up to date (and the repair couldn't be
- done by sorting the index), update them.
- `OPTIMIZE TABLE' for `MyISAM' tables is equivalent to running
- `myisamchk --quick --check-changed-tables --sort-index --analyze' on
- the table.
- Note that the table is locked during the time `OPTIMIZE TABLE' is
- running!
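- As an illustration (the table names are hypothetical), after deleting a
- large part of two tables you could reclaim the space with:
- mysql> OPTIMIZE TABLE customer_log, order_history;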
- `CHECK TABLE' Syntax
- ====================
- CHECK TABLE tbl_name[,tbl_name...] [option [option...]]
-
- option = QUICK | FAST | MEDIUM | EXTENDED | CHANGED
- `CHECK TABLE' only works on `MyISAM' tables. On `MyISAM' tables it's
- the same thing as running `myisamchk -m table_name' on the table.
- If you don't specify any option, `MEDIUM' is used.
- Checks the table(s) for errors. For `MyISAM' tables the key statistics
- are updated. The command returns a table with the following columns:
- *Column* *Value*
- Table Table name.
- Op Always "check".
- Msg_type One of `status', `error', `info', or `warning'.
- Msg_text The message.
- Note that you can get many rows of information for each checked table.
- The last row will be of `Msg_type status' and should normally be `OK'.
- If you don't get `OK' or `Not checked', you should normally run a
- repair of the table. *Note Table maintenance::. `Not checked' means
- that the given check `TYPE' told *MySQL* that there wasn't any need
- to check the table.
- The different check types stand for the following:
- *Type* *Meaning*
- `QUICK' Don't scan the rows to check for wrong links.
- `FAST' Only check tables which haven't been closed properly.
- `CHANGED' Only check tables which have been changed since last
- check or haven't been closed properly.
- `MEDIUM' Scan rows to verify that deleted links are ok. This also
- calculates a key checksum for the rows and verifies this
- with a calculated checksum for the keys.
- `EXTENDED' Do a full key lookup for all keys for each row. This
- ensures that the table is 100 % consistent, but will take
- a long time!
- For dynamically sized `MyISAM' tables, a started check will always do a
- `MEDIUM' check. For static-size rows, we skip the row scan for `QUICK'
- and `FAST' as the rows are very seldom corrupted.
- You can combine check options as in:
- CHECK TABLE test_table FAST QUICK;
- which would only do a quick check on the table if it wasn't closed
- properly.
- *NOTE:* In some cases `CHECK TABLE' will change the table! This
- happens if the table is marked as 'corrupted' or 'not closed properly'
- but `CHECK TABLE' didn't find any problems in the table. In this case
- `CHECK TABLE' will mark the table as ok.
- If a table is corrupted, then it's most likely that the problem is in
- the indexes and not in the data part. All of the above check types
- check the indexes thoroughly and should thus find most errors.
- If you just want to check a table that you assume is ok, you should use
- no check options or the `QUICK' option. The latter should be used when
- you are in a hurry and can take the very small risk that `QUICK' didn't
- find an error in the data file (In most cases *MySQL* should find,
- under normal usage, any error in the data file. If this happens then
- the table will be marked as 'corrupted', in which case the table can't
- be used until it's repaired).
- `FAST' and `CHANGED' are mostly intended to be used from a script (for
- example to be executed from cron) if you want to check your table from
- time to time. In most cases `FAST' is to be preferred over
- `CHANGED'. (The only case when it isn't is when you suspect that you
- have found a bug in the `MyISAM' code.)
- `EXTENDED' is only to be used after you have run a normal check but
- still get strange errors from a table when *MySQL* tries to update a
- row or find a row by key (this is VERY unlikely to happen if a normal
- check has succeeded!).
- `BACKUP TABLE' Syntax
- =====================
- BACKUP TABLE tbl_name[,tbl_name...] TO '/path/to/backup/directory'
- Copies to the backup directory the minimum number of table files needed
- to restore the table. Currently this only works for `MyISAM' tables.
- For a `MyISAM' table, it copies the `.frm' (definition) and `.MYD' (data)
- files. The index file can be rebuilt from those two.
- During the backup, a read lock will be held for each table, one at a time,
- as they are being backed up. If you want to back up several tables as a
- snapshot, you must first issue `LOCK TABLES', obtaining a read lock for
- each table in the group.
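- For example, a consistent snapshot of two hypothetical tables could be
- taken like this (the table names and the path are illustrative):
- mysql> LOCK TABLES customer READ, orders READ;
- mysql> BACKUP TABLE customer, orders TO '/var/backup/mysql';
- mysql> UNLOCK TABLES;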
- The command returns a table with the following columns:
- *Column* *Value*
- Table Table name
- Op Always "backup"
- Msg_type One of `status', `error', `info' or `warning'.
- Msg_text The message.
- Note that `BACKUP TABLE' is only available in *MySQL* version 3.23.25
- and later.
- `RESTORE TABLE' Syntax
- ======================
- RESTORE TABLE tbl_name[,tbl_name...] FROM '/path/to/backup/directory'
- Restores the table(s) from the backup that was made with `BACKUP
- TABLE'. Existing tables will not be overwritten - if you try to restore
- over an existing table, you will get an error. Restoring will take
- longer than `BACKUP TABLE' due to the need to rebuild the indexes. The
- more keys you have, the longer it is going to take. Just as with `BACKUP
- TABLE', this currently only works for `MyISAM' tables.
- The command returns a table with the following columns:
- *Column* *Value*
- Table Table name
- Op Always "restore"
- Msg_type One of `status', `error', `info' or `warning'.
- Msg_text The message.
- `ANALYZE TABLE' Syntax
- ======================
- ANALYZE TABLE tbl_name[,tbl_name...]
- Analyze and store the key distribution for the table. During the
- analyze the table is locked with a read lock. This works on `MyISAM'
- and `BDB' tables.
- This is equivalent to running `myisamchk -a' on the table.
- *MySQL* uses the stored key distribution to decide in which order
- tables should be joined when one does a join on something other than a
- constant.
- The command returns a table with the following columns:
- *Column* *Value*
- Table Table name
- Op Always "analyze"
- Msg_type One of `status', `error', `info' or `warning'.
- Msg_text The message.
- You can check the stored key distribution with the `SHOW INDEX' command.
- *Note SHOW DATABASE INFO::.
- If the table hasn't changed since the last `ANALYZE TABLE' command, the
- table will not be analyzed again.
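- As a brief sketch (the table name is hypothetical), you could refresh and
- then inspect the key distribution like this:
- mysql> ANALYZE TABLE employee;
- mysql> SHOW INDEX FROM employee;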
- `REPAIR TABLE' Syntax
- =====================
- REPAIR TABLE tbl_name[,tbl_name...] [QUICK] [EXTENDED]
- `REPAIR TABLE' only works on `MyISAM' tables and is the same as running
- `myisamchk -r table_name' on the table.
- Repair the corrupted table. The command returns a table with the
- following columns:
- *Column* *Value*
- Table Table name
- Op Always "repair"
- Msg_type One of `status', `error', `info' or `warning'.
- Msg_text The message.
- Note that you can get many rows of information for each repaired table.
- The last row will be of `Msg_type status' and should normally be
- `OK'. If you don't get `OK', you should try repairing the table with
- `myisamchk -o', as `REPAIR TABLE' does not yet implement all the
- options of `myisamchk'. In the near future, we will make it more
- flexible.
- If `QUICK' is given then *MySQL* will try to do a `REPAIR' of only the
- index tree.
- If you use `EXTENDED' then *MySQL* will create the index row by row
- instead of creating one index at a time with sorting; this may be
- better than sorting on fixed-length keys if you have long `char()' keys
- that compress very well.
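- For example (the table name is hypothetical), a repair of only the index
- tree could look like:
- mysql> REPAIR TABLE order_history QUICK;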
- `DELETE' Syntax
- ===============
- DELETE [LOW_PRIORITY] FROM tbl_name
- [WHERE where_definition]
- [LIMIT rows]
- `DELETE' deletes rows from `tbl_name' that satisfy the condition given
- by `where_definition', and returns the number of records deleted.
- If you issue a `DELETE' with no `WHERE' clause, all rows are deleted.
- If you do this in `AUTOCOMMIT' mode, this works as `TRUNCATE'. *Note
- TRUNCATE::. One problem with this is that `DELETE' will return zero as
- the number of affected records, but this will be fixed in 4.0.
- If you really want to know how many records are deleted when you are
- deleting all rows, and are willing to suffer a speed penalty, you can
- use a `DELETE' statement of this form:
- mysql> DELETE FROM tbl_name WHERE 1>0;
- Note that this is MUCH slower than `DELETE FROM tbl_name' with no
- `WHERE' clause, because it deletes rows one at a time.
- If you specify the keyword `LOW_PRIORITY', execution of the `DELETE' is
- delayed until no other clients are reading from the table.
- Deleted records are maintained in a linked list and subsequent `INSERT'
- operations reuse old record positions. To reclaim unused space and
- reduce file sizes, use the `OPTIMIZE TABLE' statement or the `myisamchk'
- utility to reorganize tables. `OPTIMIZE TABLE' is easier, but
- `myisamchk' is faster. See *Note `OPTIMIZE TABLE': OPTIMIZE TABLE and
- *Note Optimization::.
- The *MySQL*-specific `LIMIT rows' option to `DELETE' tells the server
- the maximum number of rows to be deleted before control is returned to
- the client. This can be used to ensure that a specific `DELETE'
- command doesn't take too much time. You can simply repeat the `DELETE'
- command until the number of affected rows is less than the `LIMIT'
- value.
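- For example (the table and column names are hypothetical), you could purge
- an old log in small batches by repeating the following statement until it
- reports fewer than 1000 affected rows:
- mysql> DELETE FROM log_table WHERE created < '2000-01-01' LIMIT 1000;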
- `TRUNCATE' Syntax
- =================
- TRUNCATE TABLE table_name
- `TRUNCATE TABLE' is available in 3.23 and does the same thing as
- `DELETE FROM table_name'. *Note DELETE::. The differences are:
- * Implemented as a drop and re-create of the table, which makes this
- much faster when deleting many rows.
- * Not transaction-safe; `TRUNCATE TABLE' will automatically end the
- current transaction as if `COMMIT' had been called.
- * Doesn't return the number of deleted rows.
- * As long as the table definition file `table_name.frm' is valid,
- the table can be re-created this way, even if the data or index
- files have become corrupted.
- `SELECT' Syntax
- ===============
- SELECT [STRAIGHT_JOIN] [SQL_SMALL_RESULT] [SQL_BIG_RESULT] [SQL_BUFFER_RESULT]
- [HIGH_PRIORITY]
- [DISTINCT | DISTINCTROW | ALL]
- select_expression,...
- [INTO {OUTFILE | DUMPFILE} 'file_name' export_options]
- [FROM table_references
- [WHERE where_definition]
- [GROUP BY {unsigned_integer | col_name | formula} [ASC | DESC], ...]
- [HAVING where_definition]
- [ORDER BY {unsigned_integer | col_name | formula} [ASC | DESC] ,...]
- [LIMIT [offset,] rows]
- [PROCEDURE procedure_name] ]
- `SELECT' is used to retrieve rows selected from one or more tables.
- `select_expression' indicates the columns you want to retrieve.
- `SELECT' may also be used to retrieve rows computed without reference to
- any table. For example:
- mysql> SELECT 1 + 1;
- -> 2
- All keywords used must be given in exactly the order shown above. For
- example, a `HAVING' clause must come after any `GROUP BY' clause and
- before any `ORDER BY' clause.
- * A `SELECT' expression may be given an alias using `AS'. The alias
- is used as the expression's column name and can be used with
- `ORDER BY' or `HAVING' clauses. For example:
- mysql> select concat(last_name,', ',first_name) AS full_name
- from mytable ORDER BY full_name;
- * The `FROM table_references' clause indicates the tables from which
- to retrieve rows. If you name more than one table, you are
- performing a join. For information on join syntax, see *Note
- `JOIN': JOIN.
- * You can refer to a column as `col_name', `tbl_name.col_name', or
- `db_name.tbl_name.col_name'. You need not specify a `tbl_name' or
- `db_name.tbl_name' prefix for a column reference in a `SELECT'
- statement unless the reference would be ambiguous. See *Note
- Legal names::, for examples of ambiguity that require the more
- explicit column reference forms.
- * A table reference may be aliased using `tbl_name [AS] alias_name':
- mysql> select t1.name, t2.salary from employee AS t1, info AS t2
- where t1.name = t2.name;
- mysql> select t1.name, t2.salary from employee t1, info t2
- where t1.name = t2.name;
- * Columns selected for output may be referred to in `ORDER BY' and
- `GROUP BY' clauses using column names, column aliases, or column
- positions. Column positions begin with 1:
- mysql> select college, region, seed from tournament
- ORDER BY region, seed;
- mysql> select college, region AS r, seed AS s from tournament
- ORDER BY r, s;
- mysql> select college, region, seed from tournament
- ORDER BY 2, 3;
- To sort in reverse order, add the `DESC' (descending) keyword to
- the name of the column in the `ORDER BY' clause that you are
- sorting by. The default is ascending order; this may be specified
- explicitly using the `ASC' keyword.
- * In the `WHERE' clause, you can use any of the functions that
- *MySQL* supports. *Note Functions::.
- * The `HAVING' clause can refer to any column or alias named in the
- `select_expression'. It is applied last, just before items are
- sent to the client, with no optimization. Don't use `HAVING' for
- items that should be in the `WHERE' clause. For example, do not
- write this:
- mysql> select col_name from tbl_name HAVING col_name > 0;
- Write this instead:
- mysql> select col_name from tbl_name WHERE col_name > 0;
- In *MySQL* Version 3.22.5 or later, you can also write queries
- like this:
- mysql> select user,max(salary) from users
- group by user HAVING max(salary)>10;
- In older *MySQL* versions, you can write this instead:
- mysql> select user,max(salary) AS sum from users
- group by user HAVING sum>10;
- * `SQL_SMALL_RESULT', `SQL_BIG_RESULT', `SQL_BUFFER_RESULT',
- `STRAIGHT_JOIN', and `HIGH_PRIORITY' are *MySQL* extensions to
- ANSI SQL92.
- * `HIGH_PRIORITY' will give the `SELECT' higher priority than a
- statement that updates a table. You should only use this for
- queries that are very fast and must be done at once. A `SELECT
- HIGH_PRIORITY' query will run if the table is locked for read even
- if there is an update statement that is waiting for the table to
- be free.
- * `SQL_BIG_RESULT' can be used with `GROUP BY' or `DISTINCT' to tell
- the optimizer that the result set will have many rows. In this
- case, *MySQL* will directly use disk-based temporary tables if
- needed. *MySQL* will also, in this case, prefer sorting to doing a
- temporary table with a key on the `GROUP BY' elements.
- * If you use `GROUP BY', the output rows will be sorted according to
- the `GROUP BY' as if you had an `ORDER BY' over all the
- fields in the `GROUP BY'. *MySQL* has extended `GROUP BY' so
- that you can also specify `ASC' and `DESC' with `GROUP BY':
- SELECT a,COUNT(b) FROM test_table GROUP BY a DESC
- * *MySQL* has extended the use of `GROUP BY' to allow you to select
- fields which are not mentioned in the `GROUP BY' clause. If you
- are not getting the results you expect from your query, please
- read the `GROUP BY' description. *Note Group by functions::.
- * `SQL_BUFFER_RESULT' will force the result to be put into a
- temporary table. This will help *MySQL* free the table locks early
- and will help in cases where it takes a long time to send the
- result set to the client.
- * `SQL_SMALL_RESULT', a *MySQL*-specific option, can be used with
- `GROUP BY' or `DISTINCT' to tell the optimizer that the result set
- will be small. In this case, *MySQL* will use fast temporary
- tables to store the resulting table instead of using sorting. In
- *MySQL* Version 3.23 this shouldn't normally be needed.
- * `STRAIGHT_JOIN' forces the optimizer to join the tables in the
- order in which they are listed in the `FROM' clause. You can use
- this to speed up a query if the optimizer joins the tables in
- non-optimal order. *Note `EXPLAIN': EXPLAIN.
- * The `LIMIT' clause can be used to constrain the number of rows
- returned by the `SELECT' statement. `LIMIT' takes one or two
- numeric arguments.
- If two arguments are given, the first specifies the offset of the
- first row to return, the second specifies the maximum number of
- rows to return. The offset of the initial row is 0 (not 1):
- mysql> select * from table LIMIT 5,10; # Retrieve rows 6-15
- If one argument is given, it indicates the maximum number of rows
- to return:
- mysql> select * from table LIMIT 5; # Retrieve first 5 rows
- In other words, `LIMIT n' is equivalent to `LIMIT 0,n'.
- * The `SELECT ... INTO OUTFILE 'file_name'' form of `SELECT' writes
- the selected rows to a file. The file is created on the server
- host and cannot already exist (among other things, this prevents
- database tables and files such as `/etc/passwd' from being
- destroyed). You must have the *file* privilege on the server host
- to use this form of `SELECT'.
- `SELECT ... INTO OUTFILE' is mainly intended to let you very
- quickly dump a table on the server machine. If you want to create
- the resulting file on some other host than the server host you
- can't use `SELECT ... INTO OUTFILE'. In this case you should
- instead use some client program like `mysqldump --tab' or `mysql
- -e "SELECT ..." > outfile' to generate the file.
- `SELECT ... INTO OUTFILE' is the complement of `LOAD DATA
- INFILE'; the syntax for the `export_options' part of the statement
- consists of the same `FIELDS' and `LINES' clauses that are used
- with the `LOAD DATA INFILE' statement. *Note `LOAD DATA': LOAD
- DATA.
- In the resulting text file, only the following characters are
- escaped by the `ESCAPED BY' character:
- * The `ESCAPED BY' character
- * The first character in `FIELDS TERMINATED BY'
- * The first character in `LINES TERMINATED BY'
- Additionally, `ASCII 0' is converted to `ESCAPED BY' followed by 0
- (`ASCII 48').
- The reason for the above is that you MUST escape any `FIELDS
- TERMINATED BY', `ESCAPED BY', or `LINES TERMINATED BY' characters
- to reliably be able to read the file back. `ASCII 0' is escaped to
- make it easier to view with some pagers.
- As the resulting file doesn't have to conform to the SQL syntax,
- nothing else need be escaped.
- Here follows an example of getting a file in the format used by
- many old programs.
- SELECT a,b,a+b INTO OUTFILE "/tmp/result.text"
- FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
- LINES TERMINATED BY "\n"
- FROM test_table;
- * If you use `INTO DUMPFILE' instead of `INTO OUTFILE', *MySQL* will
- only write one row into the file, without any column or line
- terminations and without any escaping. This is useful if you want
- to store a blob in a file.
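- For instance (the table and column names are hypothetical), a single
- `BLOB' value could be written out unmodified like this:
- mysql> SELECT image_data INTO DUMPFILE '/tmp/image.bin'
- FROM pictures WHERE id=1;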
- `JOIN' Syntax
- =============
- *MySQL* supports the following `JOIN' syntaxes for use in `SELECT'
- statements:
- table_reference, table_reference
- table_reference [CROSS] JOIN table_reference
- table_reference INNER JOIN table_reference join_condition
- table_reference STRAIGHT_JOIN table_reference
- table_reference LEFT [OUTER] JOIN table_reference join_condition
- table_reference LEFT [OUTER] JOIN table_reference
- table_reference NATURAL [LEFT [OUTER]] JOIN table_reference
- { oj table_reference LEFT OUTER JOIN table_reference ON conditional_expr }
- table_reference RIGHT [OUTER] JOIN table_reference join_condition
- table_reference RIGHT [OUTER] JOIN table_reference
- table_reference NATURAL [RIGHT [OUTER]] JOIN table_reference
- Where `table_reference' is defined as:
- table_name [[AS] alias] [USE INDEX (key_list)] [IGNORE INDEX (key_list)]
- and `join_condition' is defined as:
- ON conditional_expr |
- USING (column_list)
- Note that in versions before Version 3.23.16, the `INNER JOIN' didn't
- take a join condition!
- The `{ oj ... LEFT OUTER JOIN ... }' syntax shown above exists only for
- compatibility with ODBC:
- * A table reference may be aliased using `tbl_name AS alias_name' or
- `tbl_name alias_name':
- mysql> select t1.name, t2.salary from employee AS t1, info AS t2
- where t1.name = t2.name;
- * `INNER JOIN' and `,' (comma) are semantically equivalent. Both do
- a full join between the tables used. Normally, you specify how
- the tables should be linked in the `WHERE' condition.
- * The `ON' conditional is any conditional of the form that may be
- used in a `WHERE' clause.
- * If there is no matching record for the right table in the `ON' or
- `USING' part in a `LEFT JOIN', a row with all columns set to
- `NULL' is used for the right table. You can use this fact to find
- records in a table that have no counterpart in another table:
- mysql> select table1.* from table1
- LEFT JOIN table2 ON table1.id=table2.id
- where table2.id is NULL;
- This example finds all rows in `table1' with an `id' value that is
- not present in `table2' (that is, all rows in `table1' with no
- corresponding row in `table2'). This assumes that `table2.id' is
- declared `NOT NULL', of course. *Note LEFT JOIN optimization::.
- * The `USING' `(column_list)' clause names a list of columns that
- must exist in both tables. A `USING' clause such as:
- A LEFT JOIN B USING (C1,C2,C3,...)
- is defined to be semantically identical to an `ON' expression like
- this:
- A.C1=B.C1 AND A.C2=B.C2 AND A.C3=B.C3,...
- * The `NATURAL [LEFT] JOIN' of two tables is defined to be
- semantically equivalent to an `INNER JOIN' or a `LEFT JOIN' with a
- `USING' clause that names all columns that exist in both tables.
- * `RIGHT JOIN' works analogously to `LEFT JOIN'. To keep code
- portable across databases, it's recommended to use `LEFT JOIN'
- instead of `RIGHT JOIN'.
- * `STRAIGHT_JOIN' is identical to `JOIN', except that the left table
- is always read before the right table. This can be used for those
- (few) cases where the join optimizer puts the tables in the wrong
- order.
- * As of *MySQL* Version 3.23.12, you can give hints about which
- index *MySQL* should use when retrieving information from a table.
- This is useful if `EXPLAIN' shows that *MySQL* is using the wrong
- index. By specifying `USE INDEX (key_list)', you can tell *MySQL*
- to use only one of the specified indexes to find rows in the
- table. The alternative syntax `IGNORE INDEX (key_list)' can be
- used to tell *MySQL* to not use some particular index.
- Some examples:
- mysql> select * from table1,table2 where table1.id=table2.id;
- mysql> select * from table1 LEFT JOIN table2 ON table1.id=table2.id;
- mysql> select * from table1 LEFT JOIN table2 USING (id);
- mysql> select * from table1 LEFT JOIN table2 ON table1.id=table2.id
- LEFT JOIN table3 ON table2.id=table3.id;
- mysql> select * from table1 USE INDEX (key1,key2) WHERE key1=1 and key2=2 AND
- key3=3;
- mysql> select * from table1 IGNORE INDEX (key3) WHERE key1=1 and key2=2 AND
- key3=3;
- *Note `LEFT JOIN' optimization: LEFT JOIN optimization.
- `INSERT' Syntax
- ===============
- INSERT [LOW_PRIORITY | DELAYED] [IGNORE]
- [INTO] tbl_name [(col_name,...)]
- VALUES (expression,...),(...),...
- or INSERT [LOW_PRIORITY | DELAYED] [IGNORE]
- [INTO] tbl_name [(col_name,...)]
- SELECT ...
- or INSERT [LOW_PRIORITY | DELAYED] [IGNORE]
- [INTO] tbl_name
- SET col_name=expression, col_name=expression, ...
- or INSERT [LOW_PRIORITY] [IGNORE] [INTO] tbl_name
- SELECT ...
- `INSERT' inserts new rows into an existing table. The `INSERT ...
- VALUES' form of the statement inserts rows based on explicitly
- specified values. The `INSERT ... SELECT' form inserts rows selected
- from another table or tables. The `INSERT ... VALUES' form with
- multiple value lists is supported in *MySQL* Version 3.22.5 or later.
- The `col_name=expression' syntax is supported in *MySQL* Version
- 3.22.10 or later.
- `tbl_name' is the table into which rows should be inserted. The column
- name list or the `SET' clause indicates which columns the statement
- specifies values for:
- * If you specify no column list for `INSERT ... VALUES' or `INSERT
- ... SELECT', values for all columns must be provided in the
- `VALUES()' list or by the `SELECT'. If you don't know the order of
- the columns in the table, use `DESCRIBE tbl_name' to find out.
- * Any column not explicitly given a value is set to its default
- value. For example, if you specify a column list that doesn't
- name all the columns in the table, unnamed columns are set to
- their default values. Default value assignment is described in
- *Note `CREATE TABLE': CREATE TABLE.
- * An `expression' may refer to any column that was set earlier in a
- value list. For example, you can say this:
- mysql> INSERT INTO tbl_name (col1,col2) VALUES(15,col1*2);
- But not this:
- mysql> INSERT INTO tbl_name (col1,col2) VALUES(col2*2,15);
- * If you specify the keyword `LOW_PRIORITY', execution of the
- `INSERT' is delayed until no other clients are reading from the
- table. In this case the client has to wait until the insert
- statement is completed, which may take a long time if the table is
- in heavy use. This is in contrast to `INSERT DELAYED', which lets
- the client continue at once. *Note INSERT DELAYED::. Note that
- `LOW_PRIORITY' should normally not be used with `MyISAM' tables as
- this disables concurrent inserts. *Note MyISAM::.
- * If you specify the keyword `IGNORE' in an `INSERT' with many value
- rows, any rows that duplicate an existing `PRIMARY' or `UNIQUE'
- key in the table are ignored and are not inserted. If you do not
- specify `IGNORE', the insert is aborted if there is any row that
- duplicates an existing key value. You can determine with the C
- API function `mysql_info()' how many rows were inserted into the
- table.
- * If *MySQL* was configured using the `DONT_USE_DEFAULT_FIELDS'
- option, `INSERT' statements generate an error unless you explicitly
- specify values for all columns that require a non-`NULL' value.
- *Note `configure' options: configure options.
- * You can find the value used for an `AUTO_INCREMENT' column with
- the `mysql_insert_id' function. *Note `mysql_insert_id()':
- mysql_insert_id.
- If you use `INSERT ... SELECT' or an `INSERT ... VALUES' statement with
- multiple value lists, you can use the C API function `mysql_info()' to
- get information about the query. The format of the information string
- is shown below:
- Records: 100 Duplicates: 0 Warnings: 0
- `Duplicates' indicates the number of rows that couldn't be inserted
- because they would duplicate some existing unique index value.
- `Warnings' indicates the number of attempts to insert column values that
- were problematic in some way. Warnings can occur under any of the
- following conditions:
- * Inserting `NULL' into a column that has been declared `NOT NULL'.
- The column is set to its default value.
- * Setting a numeric column to a value that lies outside the column's
- range. The value is clipped to the appropriate endpoint of the
- range.
- * Setting a numeric column to a value such as `'10.34 a''. The
- trailing garbage is stripped and the remaining numeric part is
- inserted. If the value doesn't make sense as a number at all, the
- column is set to `0'.
- * Inserting a string into a `CHAR', `VARCHAR', `TEXT', or `BLOB'
- column that exceeds the column's maximum length. The value is
- truncated to the column's maximum length.
- * Inserting a value into a date or time column that is illegal for
- the column type. The column is set to the appropriate zero value
- for the type.
- INSERT ... SELECT Syntax
- ------------------------
- INSERT [LOW_PRIORITY] [IGNORE] [INTO] tbl_name [(column list)] SELECT ...
- With the `INSERT ... SELECT' statement you can quickly insert many rows
- into a table from one or many tables.
- INSERT INTO tblTemp2 (fldID) SELECT tblTemp1.fldOrder_ID FROM tblTemp1 WHERE
- tblTemp1.fldOrder_ID > 100;
- The following conditions hold for an `INSERT ... SELECT' statement:
- - The query cannot contain an `ORDER BY' clause.
- - The target table of the `INSERT' statement cannot appear in the
- `FROM' clause of the `SELECT' part of the query because it's
- forbidden in ANSI SQL to `SELECT' from the same table into which
- you are inserting. (The problem is that the `SELECT' possibly
- would find records that were inserted earlier during the same run.
- When using sub-select clauses, the situation could easily be very
- confusing!)
- - `AUTO_INCREMENT' columns work as usual.
- - You can use the C API function `mysql_info()' to get information
- about the query. *Note INSERT::.
- You can of course also use `REPLACE' instead of `INSERT' to overwrite
- old rows.
- `INSERT DELAYED' syntax
- -----------------------
- INSERT DELAYED ...
- The `DELAYED' option for the `INSERT' statement is a *MySQL*-specific
- option that is very useful if you have clients that can't wait for the
- `INSERT' to complete. This is a common problem when you use *MySQL*
- for logging and you also periodically run `SELECT' and `UPDATE'
- statements that take a long time to complete. `DELAYED' was introduced
- in *MySQL* Version 3.22.15. It is a *MySQL* extension to ANSI SQL92.
- `INSERT DELAYED' only works with `ISAM' and `MyISAM' tables. Note that
- as `MyISAM' tables support concurrent `SELECT' and `INSERT', if there
- are no empty blocks in the data file, you very seldom need to use
- `INSERT DELAYED' with `MyISAM'.
- When you use `INSERT DELAYED', the client will get an OK at once and
- the row will be inserted when the table is not in use by any other
- thread.
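- As a sketch (the table and column names are hypothetical), a logging
- client could queue a row without waiting:
- mysql> INSERT DELAYED INTO access_log (ip,url)
- VALUES ('10.0.0.1','/index.html');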
- Another major benefit of using `INSERT DELAYED' is that inserts from
- many clients are bundled together and written in one block. This is much
- faster than doing many separate inserts.
- Note that currently the queued rows are only stored in memory until
- they are inserted into the table. This means that if you kill `mysqld'
- the hard way (`kill -9') or if `mysqld' dies unexpectedly, any queued
- rows that weren't written to disk are lost!
- The following describes in detail what happens when you use the
- `DELAYED' option to `INSERT' or `REPLACE'. In this description, the
- "thread" is the thread that received an `INSERT DELAYED' command and
- "handler" is the thread that handles all `INSERT DELAYED' statements
- for a particular table.
- * When a thread executes a `DELAYED' statement for a table, a handler
- thread is created to process all `DELAYED' statements for the
- table, if no such handler already exists.
- * The thread checks whether or not the handler has acquired a
- `DELAYED' lock already; if not, it tells the handler thread to do
- so. The `DELAYED' lock can be obtained even if other threads have
- a `READ' or `WRITE' lock on the table. However, the handler will
- wait for all `ALTER TABLE' locks or `FLUSH TABLES' to ensure that
- the table structure is up to date.
- * The thread executes the `INSERT' statement, but instead of writing
- the row to the table, it puts a copy of the final row into a queue
- that is managed by the handler thread. Any syntax errors are
- noticed by the thread and reported to the client program.
- * The client can't report the number of duplicates or the
- `AUTO_INCREMENT' value for the resulting row; it can't obtain them
- from the server, because the `INSERT' returns before the insert
- operation has been completed. If you use the C API, the
- `mysql_info()' function doesn't return anything meaningful, for
- the same reason.
- * The update log is updated by the handler thread when the row is
- inserted into the table. In case of multiple-row inserts, the
- update log is updated when the first row is inserted.
- * After every `delayed_insert_limit' rows are written, the handler
- checks whether or not any `SELECT' statements are still pending.
- If so, it allows these to execute before continuing.
- * When the handler has no more rows in its queue, the table is
- unlocked. If no new `INSERT DELAYED' commands are received within
- `delayed_insert_timeout' seconds, the handler terminates.
- * If more than `delayed_queue_size' rows are pending already in a
- specific handler queue, the thread requesting `INSERT DELAYED'
- waits until there is room in the queue. This is done to ensure
- that the `mysqld' server doesn't use all memory for the delayed
- memory queue.
- * The handler thread will show up in the *MySQL* process list with
- `delayed_insert' in the `Command' column. It will be killed if
- you execute a `FLUSH TABLES' command or kill it with `KILL
- thread_id'. However, it will first store all queued rows into the
- table before exiting. During this time it will not accept any new
- `INSERT' commands from another thread. If you execute an `INSERT
- DELAYED' command after this, a new handler thread will be created.
- * Note that the above means that `INSERT DELAYED' commands have
- higher priority than normal `INSERT' commands if there is an
- `INSERT DELAYED' handler already running! Other update commands
- will have to wait until the `INSERT DELAYED' queue is empty,
- someone kills the handler thread (with `KILL thread_id'), or
- someone executes `FLUSH TABLES'.
- * The following status variables provide information about `INSERT
- DELAYED' commands:
- *Variable* *Meaning*
- `Delayed_insert_threads' Number of handler threads
- `Delayed_writes' Number of rows written with `INSERT DELAYED'
- `Not_flushed_delayed_rows' Number of rows waiting to be written
- You can view these variables by issuing a `SHOW STATUS' statement
- or by executing a `mysqladmin extended-status' command.
- Note that `INSERT DELAYED' is slower than a normal INSERT if the table
- is not in use. There is also the additional overhead for the server to
- handle a separate thread for each table on which you use `INSERT
- DELAYED'. This means that you should only use `INSERT DELAYED' when
- you are really sure you need it!
- `REPLACE' Syntax
- ================
- REPLACE [LOW_PRIORITY | DELAYED]
- [INTO] tbl_name [(col_name,...)]
- VALUES (expression,...)
- or REPLACE [LOW_PRIORITY | DELAYED]
- [INTO] tbl_name [(col_name,...)]
- SELECT ...
- or REPLACE [LOW_PRIORITY | DELAYED]
- [INTO] tbl_name
- SET col_name=expression, col_name=expression,...
- `REPLACE' works exactly like `INSERT', except that if an old record in
- the table has the same value as a new record on a unique index, the old
- record is deleted before the new record is inserted. *Note `INSERT':
- INSERT.
- In other words, you can't access the values of the old row from a
- `REPLACE' statement. In some old *MySQL* versions it looked like you
- could do this, but that was a bug that has been corrected.
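- For example (the table and column names are hypothetical), assuming `id'
- is a `PRIMARY KEY', the second statement deletes the first row before
- inserting the new one:
- mysql> REPLACE INTO counters (id,hits) VALUES (1,10);
- mysql> REPLACE INTO counters (id,hits) VALUES (1,11);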
- `LOAD DATA INFILE' Syntax
- =========================
- LOAD DATA [LOW_PRIORITY] [LOCAL] INFILE 'file_name.txt' [REPLACE | IGNORE]
- INTO TABLE tbl_name
- [FIELDS
- [TERMINATED BY '\t']
- [[OPTIONALLY] ENCLOSED BY '']
- [ESCAPED BY '\\' ]
- ]
- [LINES TERMINATED BY '\n']
- [IGNORE number LINES]
- [(col_name,...)]
- The `LOAD DATA INFILE' statement reads rows from a text file into a
- table at a very high speed. If the `LOCAL' keyword is specified, the
- file is read from the client host. If `LOCAL' is not specified, the
- file must be located on the server. (`LOCAL' is available in *MySQL*
- Version 3.22.6 or later.)
- For security reasons, when reading text files located on the server, the
- files must either reside in the database directory or be readable by
- all. Also, to use `LOAD DATA INFILE' on server files, you must have the
- *file* privilege on the server host. *Note Privileges provided::.
- If you specify the keyword `LOW_PRIORITY', execution of the `LOAD DATA'
- statement is delayed until no other clients are reading from the table.
- Using `LOCAL' will be a bit slower than letting the server access the
- files directly, because the contents of the file must travel from the
- client host to the server host. On the other hand, you do not need the
- *file* privilege to load local files.
- If you are using *MySQL* before Version 3.23.24 you can't read from a
- FIFO with `LOAD DATA INFILE'. If you need to read from a FIFO (for
- example the output from gunzip), use `LOAD DATA LOCAL INFILE' instead.
- You can also load data files by using the `mysqlimport' utility; it
- operates by sending a `LOAD DATA INFILE' command to the server. The
- `--local' option causes `mysqlimport' to read data files from the
- client host. You can specify the `--compress' option to get better
- performance over slow networks if the client and server support the
- compressed protocol.
- When locating files on the server host, the server uses the following
- rules:
- * If an absolute pathname is given, the server uses the pathname as
- is.
- * If a relative pathname with one or more leading components is
- given, the server searches for the file relative to the server's
- data directory.
- * If a filename with no leading components is given, the server
- looks for the file in the database directory of the current
- database.
- Note that these rules mean a file given as `./myfile.txt' is read from
- the server's data directory, whereas a file given as `myfile.txt' is
- read from the database directory of the current database. For example,
- the following `LOAD DATA' statement reads the file `data.txt' from the
- database directory for `db1' because `db1' is the current database,
- even though the statement explicitly loads the file into a table in the
- `db2' database:
- mysql> USE db1;
- mysql> LOAD DATA INFILE "data.txt" INTO TABLE db2.my_table;
- The `REPLACE' and `IGNORE' keywords control handling of input records
- that duplicate existing records on unique key values. If you specify
- `REPLACE', new rows replace existing rows that have the same unique key
- value. If you specify `IGNORE', input rows that duplicate an existing
- row on a unique key value are skipped. If you don't specify either
- option, an error occurs when a duplicate key value is found, and the
- rest of the text file is ignored.
- If you load data from a local file using the `LOCAL' keyword, the server
- has no way to stop transmission of the file in the middle of the
- operation, so the default behavior is the same as if `IGNORE' is
- specified.
- If you use `LOAD DATA INFILE' on an empty `MyISAM' table, all
- non-unique indexes are created in a separate batch (like in `REPAIR').
- This normally makes `LOAD DATA INFILE' much faster when you have many
- indexes.
- `LOAD DATA INFILE' is the complement of `SELECT ... INTO OUTFILE'.
- *Note `SELECT': SELECT. To write data from a database to a file, use
- `SELECT ... INTO OUTFILE'. To read the file back into the database,
- use `LOAD DATA INFILE'. The syntax of the `FIELDS' and `LINES' clauses
- is the same for both commands. Both clauses are optional, but `FIELDS'
- must precede `LINES' if both are specified.
- If you specify a `FIELDS' clause, each of its subclauses (`TERMINATED
- BY', `[OPTIONALLY] ENCLOSED BY', and `ESCAPED BY') is also optional,
- except that you must specify at least one of them.
- If you don't specify a `FIELDS' clause, the defaults are the same as if
- you had written this:
- FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\'
- If you don't specify a `LINES' clause, the default is the same as if
- you had written this:
- LINES TERMINATED BY '\n'
- In other words, the defaults cause `LOAD DATA INFILE' to act as follows
- when reading input:
- * Look for line boundaries at newlines.
- * Break lines into fields at tabs.
- * Do not expect fields to be enclosed within any quoting characters.
- * Interpret occurrences of tab, newline, or `\' preceded by `\' as
- literal characters that are part of field values.
- Conversely, the defaults cause `SELECT ... INTO OUTFILE' to act as
- follows when writing output:
- * Write tabs between fields.
- * Do not enclose fields within any quoting characters.
- * Use `\' to escape instances of tab, newline, or `\' that occur
- within field values.
- * Write newlines at the ends of lines.
- Note that to write `FIELDS ESCAPED BY '\\'', you must specify two
- backslashes for the value to be read as a single backslash.
- The `IGNORE number LINES' option can be used to ignore a header of
- column names at the start of the file:
- mysql> LOAD DATA INFILE "/tmp/file_name" into table test IGNORE 1 LINES;
- When you use `SELECT ... INTO OUTFILE' in tandem with `LOAD DATA
- INFILE' to write data from a database into a file and then read the
- file back into the database later, the field and line handling options
- for both commands must match. Otherwise, `LOAD DATA INFILE' will not
- interpret the contents of the file properly. Suppose you use `SELECT
- ... INTO OUTFILE' to write a file with fields delimited by commas:
- mysql> SELECT * INTO OUTFILE 'data.txt'
- FIELDS TERMINATED BY ','
- FROM ...
- To read the comma-delimited file back in, the correct statement would
- be:
- mysql> LOAD DATA INFILE 'data.txt' INTO TABLE table2
- FIELDS TERMINATED BY ',';
- If instead you tried to read in the file with the statement shown
- below, it wouldn't work because it instructs `LOAD DATA INFILE' to look
- for tabs between fields:
- mysql> LOAD DATA INFILE 'data.txt' INTO TABLE table2
- FIELDS TERMINATED BY '\t';
- The likely result is that each input line would be interpreted as a
- single field.
- `LOAD DATA INFILE' can be used to read files obtained from external
- sources, too. For example, a file in dBASE format will have fields
- separated by commas and enclosed in double quotes. If lines in the
- file are terminated by newlines, the command shown below illustrates
- the field and line handling options you would use to load the file:
- mysql> LOAD DATA INFILE 'data.txt' INTO TABLE tbl_name
- FIELDS TERMINATED BY ',' ENCLOSED BY '"'
- LINES TERMINATED BY '\n';
- Any of the field or line handling options may specify an empty string
- (`'''). If not empty, the `FIELDS [OPTIONALLY] ENCLOSED BY' and
- `FIELDS ESCAPED BY' values must be a single character. The `FIELDS
- TERMINATED BY' and `LINES TERMINATED BY' values may be more than one
- character. For example, to write lines that are terminated by carriage
- return-linefeed pairs, or to read a file containing such lines, specify
- a `LINES TERMINATED BY '\r\n'' clause.
- For example, to read a file of jokes that are separated by a line of
- `%%' into an SQL table, you can do:
- create table jokes (a int not null auto_increment primary key, joke text
- not null);
- load data infile "/tmp/jokes.txt" into table jokes fields terminated by ""
- lines terminated by "\n%%\n" (joke);
- `FIELDS [OPTIONALLY] ENCLOSED BY' controls quoting of fields. For
- output (`SELECT ... INTO OUTFILE'), if you omit the word `OPTIONALLY',
- all fields are enclosed by the `ENCLOSED BY' character. An example of
- such output (using a comma as the field delimiter) is shown below:
- "1","a string","100.20"
- "2","a string containing a , comma","102.20"
- "3","a string containing a " quote","102.20"
- "4","a string containing a ", quote and comma","102.20"
- If you specify `OPTIONALLY', the `ENCLOSED BY' character is used only
- to enclose `CHAR' and `VARCHAR' fields:
- 1,"a string",100.20
- 2,"a string containing a , comma",102.20
- 3,"a string containing a " quote",102.20
- 4,"a string containing a ", quote and comma",102.20
- Note that occurrences of the `ENCLOSED BY' character within a field
- value are escaped by prefixing them with the `ESCAPED BY' character.
- Also note that if you specify an empty `ESCAPED BY' value, it is
- possible to generate output that cannot be read properly by `LOAD DATA
- INFILE'. For example, the output just shown above would appear as shown
- below if the escape character is empty. Observe that the second field
- in the fourth line contains a comma following the quote, which
- (erroneously) appears to terminate the field:
- 1,"a string",100.20
- 2,"a string containing a , comma",102.20
- 3,"a string containing a " quote",102.20
- 4,"a string containing a ", quote and comma",102.20
- For input, the `ENCLOSED BY' character, if present, is stripped from the
- ends of field values. (This is true whether or not `OPTIONALLY' is
- specified; `OPTIONALLY' has no effect on input interpretation.)
- Occurrences of the `ENCLOSED BY' character preceded by the `ESCAPED BY'
- character are interpreted as part of the current field value. In
- addition, duplicated `ENCLOSED BY' characters occurring within fields
- are interpreted as single `ENCLOSED BY' characters if the field itself
- starts with that character. For example, if `ENCLOSED BY '"'' is
- specified, quotes are handled as shown below:
- "The ""BIG"" boss" -> The "BIG" boss
- The "BIG" boss -> The "BIG" boss
- The ""BIG"" boss -> The ""BIG"" boss
- `FIELDS ESCAPED BY' controls how to write or read special characters.
- If the `FIELDS ESCAPED BY' character is not empty, it is used to prefix
- the following characters on output:
- * The `FIELDS ESCAPED BY' character
- * The `FIELDS [OPTIONALLY] ENCLOSED BY' character
- * The first character of the `FIELDS TERMINATED BY' and `LINES
- TERMINATED BY' values
- * ASCII `0' (what is actually written following the escape character
- is ASCII `'0'', not a zero-valued byte)
- If the `FIELDS ESCAPED BY' character is empty, no characters are
- escaped. It is probably not a good idea to specify an empty escape
- character, particularly if field values in your data contain any of the
- characters in the list just given.
- For input, if the `FIELDS ESCAPED BY' character is not empty,
- occurrences of that character are stripped and the following character
- is taken literally as part of a field value. The exceptions are an
- escaped `0' or `N' (for example, `\0' or `\N' if the escape character is
- `\'). These sequences are interpreted as ASCII `0' (a zero-valued
- byte) and `NULL'. See below for the rules on `NULL' handling.
- For more information about `\'-escape syntax, see *Note Literals::.
- In certain cases, field and line handling options interact:
- * If `LINES TERMINATED BY' is an empty string and `FIELDS TERMINATED
- BY' is non-empty, lines are also terminated with `FIELDS
- TERMINATED BY'.
- * If the `FIELDS TERMINATED BY' and `FIELDS ENCLOSED BY' values are
- both empty (`'''), a fixed-row (non-delimited) format is used.
- With fixed-row format, no delimiters are used between fields.
- Instead, column values are written and read using the "display"
- widths of the columns. For example, if a column is declared as
- `INT(7)', values for the column are written using 7-character
- fields. On input, values for the column are obtained by reading 7
- characters. Fixed-row format also affects handling of `NULL'
- values; see below. Note that fixed-size format will not work if
- you are using a multi-byte character set.
- Handling of `NULL' values varies, depending on the `FIELDS' and `LINES'
- options you use:
- * For the default `FIELDS' and `LINES' values, `NULL' is written as
- `\N' for output and `\N' is read as `NULL' for input (assuming the
- `ESCAPED BY' character is `\').
- * If `FIELDS ENCLOSED BY' is not empty, a field containing the
- literal word `NULL' as its value is read as a `NULL' value (this
- differs from the word `NULL' enclosed within `FIELDS ENCLOSED BY'
- characters, which is read as the string `'NULL'').
- * If `FIELDS ESCAPED BY' is empty, `NULL' is written as the word
- `NULL'.
- * With fixed-row format (which happens when `FIELDS TERMINATED BY'
- and `FIELDS ENCLOSED BY' are both empty), `NULL' is written as an
- empty string. Note that this causes both `NULL' values and empty
- strings in the table to be indistinguishable when written to the
- file because they are both written as empty strings. If you need
- to be able to tell the two apart when reading the file back in,
- you should not use fixed-row format.
- Some cases are not supported by `LOAD DATA INFILE':
- * Fixed-size rows (`FIELDS TERMINATED BY' and `FIELDS ENCLOSED BY'
- both empty) and `BLOB' or `TEXT' columns.
- * If you specify one separator that is the same as or a prefix of
- another, `LOAD DATA INFILE' won't be able to interpret the input
- properly. For example, the following `FIELDS' clause would cause
- problems:
- FIELDS TERMINATED BY '"' ENCLOSED BY '"'
- * If `FIELDS ESCAPED BY' is empty, a field value that contains an
- occurrence of `FIELDS ENCLOSED BY' or `LINES TERMINATED BY'
- followed by the `FIELDS TERMINATED BY' value will cause `LOAD DATA
- INFILE' to stop reading a field or line too early. This happens
- because `LOAD DATA INFILE' cannot properly determine where the
- field or line value ends.
- The following example loads all columns of the `persondata' table:
- mysql> LOAD DATA INFILE 'persondata.txt' INTO TABLE persondata;
- No field list is specified, so `LOAD DATA INFILE' expects input rows to
- contain a field for each table column. The default `FIELDS' and
- `LINES' values are used.
- If you wish to load only some of a table's columns, specify a field
- list:
- mysql> LOAD DATA INFILE 'persondata.txt'
- INTO TABLE persondata (col1,col2,...);
- You must also specify a field list if the order of the fields in the
- input file differs from the order of the columns in the table.
- Otherwise, *MySQL* cannot tell how to match up input fields with table
- columns.
- If a row has too few fields, the columns for which no input field is
- present are set to default values. Default value assignment is
- described in *Note `CREATE TABLE': CREATE TABLE.
- An empty field value is interpreted differently than if the field value
- is missing:
- * For string types, the column is set to the empty string.
- * For numeric types, the column is set to `0'.
- * For date and time types, the column is set to the appropriate
- "zero" value for the type. *Note Date and time types::.
- `TIMESTAMP' columns are only set to the current date and time if there
- is a `NULL' value for the column, or (for the first `TIMESTAMP' column
- only) if the `TIMESTAMP' column is left out from the field list when a
- field list is specified.
- If an input row has too many fields, the extra fields are ignored and
- the number of warnings is incremented.
- `LOAD DATA INFILE' regards all input as strings, so you can't use
- numeric values for `ENUM' or `SET' columns the way you can with
- `INSERT' statements. All `ENUM' and `SET' values must be specified as
- strings!
- If you are using the C API, you can get information about the query by
- calling the API function `mysql_info()' when the `LOAD DATA INFILE'
- query finishes. The format of the information string is shown below:
- Records: 1 Deleted: 0 Skipped: 0 Warnings: 0
- Warnings occur under the same circumstances as when values are inserted
- via the `INSERT' statement (*note `INSERT': INSERT.), except that `LOAD
- DATA INFILE' also generates warnings when there are too few or too many
- fields in the input row. The warnings are not stored anywhere; the
- number of warnings can only be used as an indication of whether everything
- went well. If you get warnings and want to know exactly why you got them,
- one way to do this is to use `SELECT ... INTO OUTFILE' into another
- file and compare this to your original input file.
- If you need `LOAD DATA' to read from a pipe, you can use the following
- trick:
- mkfifo /mysql/db/x/x
- chmod 666 /mysql/db/x/x
- cat < /dev/tcp/10.1.1.12/4711 > /nt/mysql/db/x/x
- mysql -e "LOAD DATA INFILE 'x' INTO TABLE x" x
- If you are using a version of *MySQL* older than 3.23.25 you can only
- do the above with `LOAD DATA LOCAL INFILE'.
- For more information about the efficiency of `INSERT' versus `LOAD DATA
- INFILE' and speeding up `LOAD DATA INFILE', *Note Insert speed::.
- `UPDATE' Syntax
- ===============
- UPDATE [LOW_PRIORITY] [IGNORE] tbl_name
- SET col_name1=expr1, [col_name2=expr2, ...]
- [WHERE where_definition]
- [ORDER BY ...]
- [LIMIT #]
- `UPDATE' updates columns in existing table rows with new values. The
- `SET' clause indicates which columns to modify and the values they
- should be given. The `WHERE' clause, if given, specifies which rows
- should be updated. Otherwise all rows are updated. If the `ORDER BY'
- clause is specified, the rows will be updated in the order that is
- specified.
- If you specify the keyword `LOW_PRIORITY', execution of the `UPDATE' is
- delayed until no other clients are reading from the table.
- If you specify the keyword `IGNORE', the update statement will not
- abort even if we get duplicate key errors during the update. Rows that
- would cause conflicts will not be updated.
- If you access a column from `tbl_name' in an expression, `UPDATE' uses
- the current value of the column. For example, the following statement
- sets the `age' column to one more than its current value:
- mysql> UPDATE persondata SET age=age+1;
- `UPDATE' assignments are evaluated from left to right. For example, the
- following statement doubles the `age' column, then increments it:
- mysql> UPDATE persondata SET age=age*2, age=age+1;
- If you set a column to the value it currently has, *MySQL* notices this
- and doesn't update it.
- `UPDATE' returns the number of rows that were actually changed. In
- *MySQL* Version 3.22 or later, the C API function `mysql_info()'
- returns the number of rows that were matched and updated and the number
- of warnings that occurred during the `UPDATE'.
- In *MySQL* Version 3.23, you can use `LIMIT #' to ensure that only a
- given number of rows are changed.
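- For example (the table and column names are hypothetical), the following
- changes at most 10 of the oldest unprocessed rows in one pass:
- mysql> UPDATE queue SET processed=1
- WHERE processed=0 ORDER BY created LIMIT 10;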
- `USE' Syntax
- ============
- USE db_name
- The `USE db_name' statement tells *MySQL* to use the `db_name' database
- as the default database for subsequent queries. The database remains
- current until the end of the session or until another `USE' statement
- is issued:
- mysql> USE db1;
- mysql> SELECT count(*) FROM mytable; # selects from db1.mytable
- mysql> USE db2;
- mysql> SELECT count(*) FROM mytable; # selects from db2.mytable
- Making a particular database current by means of the `USE' statement
- does not preclude you from accessing tables in other databases. The
- example below accesses the `author' table from the `db1' database and
- the `editor' table from the `db2' database:
- mysql> USE db1;
- mysql> SELECT author_name,editor_name FROM author,db2.editor
- WHERE author.editor_id = db2.editor.editor_id;
- The `USE' statement is provided for Sybase compatibility.
- `FLUSH' Syntax (Clearing Caches)
- ================================
- FLUSH flush_option [,flush_option]
- You should use the `FLUSH' command if you want to clear some of the
- internal caches *MySQL* uses. To execute `FLUSH', you must have the
- *reload* privilege.
- `flush_option' can be any of the following:
- `HOSTS' Empties the host cache tables. You should flush the host
- tables if some of your hosts change IP number or if you get
- the error message `Host ... is blocked'. When more than
- `max_connect_errors' errors occur in a row for a given host
- while connecting to the *MySQL* server, *MySQL* assumes
- something is wrong and blocks the host from further
- connection requests. Flushing the host tables allows the
- host to attempt to connect again. *Note Blocked host::. You
- can start `mysqld' with `-O max_connect_errors=999999999'
- to avoid this error message.
- `LOGS' Closes and reopens all log files. If you have specified the
- update log file or a binary log file without an extension,
- the extension number of the log file will be incremented by
- one relative to the previous file. If you have used an
- extension in the file name, *MySQL* will close and reopen
- the update log file. *Note Update log::.
- `PRIVILEGES' Reloads the privileges from the grant tables in the `mysql'
- database.
- `TABLES' Closes all open tables and forces all tables in use to be
- closed.
- `[TABLE | TABLES] table_name [,table_name...]'
- Flushes only the given tables.
- `TABLES WITH READ LOCK'
- Closes all open tables and locks all tables for all
- databases with a read lock until one executes `UNLOCK TABLES'.
- This is a very convenient way to get backups if you have a
- file system, like Veritas, that can take snapshots in time.
- `STATUS' Resets most status variables to zero.
- You can also access each of the commands shown above with the
- `mysqladmin' utility, using the `flush-hosts', `flush-logs', `reload',
- or `flush-tables' commands.
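- As an illustration, a minimal sketch of a file-system level backup using
- `FLUSH TABLES WITH READ LOCK' (the copy step in the middle depends on
- your own tools and paths):
- mysql> FLUSH TABLES WITH READ LOCK;
- (copy or snapshot the database directory at the file system level)
- mysql> UNLOCK TABLES;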
- `KILL' Syntax
- =============
- KILL thread_id
- Each connection to `mysqld' runs in a separate thread. You can see
- which threads are running with the `SHOW PROCESSLIST' command and kill
- a thread with the `KILL thread_id' command.
- If you have the *process* privilege, you can see and kill all threads.
- Otherwise, you can see and kill only your own threads.
- You can also use the `mysqladmin processlist' and `mysqladmin kill'
- commands to examine and kill threads.
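- For example, a hypothetical session that looks up a runaway query and
- kills it (the thread id 27 is only an example; use the `Id' column shown
- by `SHOW PROCESSLIST' on your own server):
- mysql> SHOW PROCESSLIST;
- mysql> KILL 27;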
- When you do a `KILL', a thread specific `kill flag' is set for the
- thread.
- In most cases it may take some time for the thread to die as the kill
- flag is only checked at specific intervals.
- * In `SELECT', `ORDER BY' and `GROUP BY' loops, the flag is checked
- after reading a block of rows. If the kill flag is set, the
- statement is aborted.
- * When doing an `ALTER TABLE' the kill flag is checked before each
- block of rows is read from the original table. If the kill flag
- was set the command is aborted and the temporary table is deleted.
- * When doing an `UPDATE' or `DELETE', the kill flag is
- checked after each block read and after each updated or deleted
- row. If the kill flag is set the statement is aborted. Note that
- if you are not using transactions, the changes will not be rolled
- back!
- * `GET_LOCK()' will abort with `NULL'.
- * An `INSERT DELAYED' thread will quickly flush all rows it has in
- memory and die.
- * If the thread is in the table lock handler (state: `Locked'), the
- table lock will be quickly aborted.
- * If the thread is waiting for free disk space in a `write' call, the
- write is aborted with a disk full error message.
- `SHOW' Syntax (Get Information About Tables, Columns,...)
- =========================================================
- SHOW DATABASES [LIKE wild]
- or SHOW [OPEN] TABLES [FROM db_name] [LIKE wild]
- or SHOW [FULL] COLUMNS FROM tbl_name [FROM db_name] [LIKE wild]
- or SHOW INDEX FROM tbl_name [FROM db_name]
- or SHOW TABLE STATUS [FROM db_name] [LIKE wild]
- or SHOW STATUS [LIKE wild]
- or SHOW VARIABLES [LIKE wild]
- or SHOW LOGS
- or SHOW [FULL] PROCESSLIST
- or SHOW GRANTS FOR user
- or SHOW CREATE TABLE table_name
- or SHOW MASTER STATUS
- or SHOW MASTER LOGS
- or SHOW SLAVE STATUS
- `SHOW' provides information about databases, tables, columns, or status
- information about the server. If the `LIKE wild' part is used, the
- `wild' string can be a string that uses the SQL `%' and `_' wild-card
- characters.
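- For example, a hypothetical statement that lists only the tables in the
- current database whose names start with `my':
- mysql> SHOW TABLES LIKE 'my%';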
- SHOW Information About Databases, Tables, Columns, and Indexes
- --------------------------------------------------------------
- You can use `db_name.tbl_name' as an alternative to the `tbl_name FROM
- db_name' syntax. These two statements are equivalent:
- mysql> SHOW INDEX FROM mytable FROM mydb;
- mysql> SHOW INDEX FROM mydb.mytable;
- `SHOW DATABASES' lists the databases on the *MySQL* server host. You
- can also get this list using the `mysqlshow' command.
- `SHOW TABLES' lists the tables in a given database. You can also get
- this list using the `mysqlshow db_name' command.
- *NOTE:* If a user doesn't have any privileges for a table, the table
- will not show up in the output from `SHOW TABLES' or `mysqlshow
- db_name'.
- `SHOW OPEN TABLES' lists the tables that are currently open in the
- table cache. *Note Table cache::. The `Comment' field tells how many
- times the table is `cached' and `in_use'.
- `SHOW COLUMNS' lists the columns in a given table. If you specify the
- `FULL' option, you will also get the privileges you have for each
- column. If the column types are different than you expect them to be
- based on a `CREATE TABLE' statement, note that *MySQL* sometimes
- changes column types. *Note Silent column changes::.
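- For example, a hypothetical statement that lists the columns of `mytable'
- in the `mydb' database together with your column privileges:
- mysql> SHOW FULL COLUMNS FROM mytable FROM mydb;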
- The `DESCRIBE' statement provides information similar to `SHOW COLUMNS'.
- *Note `DESCRIBE': DESCRIBE.
- `SHOW FIELDS' is a synonym for `SHOW COLUMNS', and `SHOW KEYS' is a
- synonym for `SHOW INDEX'. You can also list a table's columns or
- indexes with `mysqlshow db_name tbl_name' or `mysqlshow -k db_name
- tbl_name'.
- `SHOW INDEX' returns the index information in a format that closely
- resembles the `SQLStatistics' call in ODBC. The following columns are
- returned:
- *Column* *Meaning*
- `Table' Name of the table.
- `Non_unique' 0 if the index can't contain duplicates.
- `Key_name' Name of the index.
- `Seq_in_index' Column sequence number in index, starting with
- 1.
- `Column_name' Column name.
- `Collation' How the column is sorted in the index. In
- *MySQL*, this can have values `A' (Ascending)
- or `NULL' (Not sorted).
- `Cardinality' Number of unique values in the index. This is
- updated by running `isamchk -a'.
- `Sub_part' Number of indexed characters if the column is
- only partly indexed. `NULL' if the entire key
- is indexed.
- `Comment' Various remarks. For now, it tells whether
- the index is FULLTEXT or not.
- Note that as the `Cardinality' is counted based on statistics stored as
- integers, it's not necessarily accurate for small tables.
- SHOW Status Information About Tables
- ------------------------------------
- SHOW TABLE STATUS [FROM db_name] [LIKE wild]
- `SHOW TABLE STATUS' (new in Version 3.23) works like `SHOW STATUS',
- but provides a lot of information about each table. You can also get
- this list using the `mysqlshow --status db_name' command. The
- following columns are returned:
- *Column* *Meaning*
- `Name' Name of the table.
- `Type' Type of table. *Note Table types::.
- `Row_format' The row storage format (Fixed, Dynamic, or
- Compressed).
- `Rows' Number of rows.
- `Avg_row_length' Average row length.
- `Data_length' Length of the data file.
- `Max_data_length' Max length of the data file.
- `Index_length' Length of the index file.
- `Data_free' Number of allocated but not used bytes.
- `Auto_increment' Next autoincrement value.
- `Create_time' When the table was created.
- `Update_time' When the data file was last updated.
- `Check_time' When the table was last checked.
- `Create_options' Extra options used with `CREATE TABLE'.
- `Comment' The comment used when creating the table (or some
- information why *MySQL* couldn't access the table
- information).
- `INNOBASE' tables will report the free space in the tablespace in the
- table comment.
- SHOW Status Information
- -----------------------
- `SHOW STATUS' provides server status information (like `mysqladmin
- extended-status'). The output resembles that shown below, though the
- format and numbers probably differ:
- +--------------------------+------------+
- | Variable_name | Value |
- +--------------------------+------------+
- | Aborted_clients | 0 |
- | Aborted_connects | 0 |
- | Bytes_received | 155372598 |
- | Bytes_sent | 1176560426 |
- | Connections | 30023 |
- | Created_tmp_disk_tables | 0 |
- | Created_tmp_tables | 8340 |
- | Created_tmp_files | 60 |
- | Delayed_insert_threads | 0 |
- | Delayed_writes | 0 |
- | Delayed_errors | 0 |
- | Flush_commands | 1 |
- | Handler_delete | 462604 |
- | Handler_read_first | 105881 |
- | Handler_read_key | 27820558 |
- | Handler_read_next | 390681754 |
- | Handler_read_prev | 6022500 |
- | Handler_read_rnd | 30546748 |
- | Handler_read_rnd_next | 246216530 |
- | Handler_update | 16945404 |
- | Handler_write | 60356676 |
- | Key_blocks_used | 14955 |
- | Key_read_requests | 96854827 |
- | Key_reads | 162040 |
- | Key_write_requests | 7589728 |
- | Key_writes | 3813196 |
- | Max_used_connections | 0 |
- | Not_flushed_key_blocks | 0 |
- | Not_flushed_delayed_rows | 0 |
- | Open_tables | 1 |
- | Open_files | 2 |
- | Open_streams | 0 |
- | Opened_tables | 44600 |
- | Questions | 2026873 |
- | Select_full_join | 0 |
- | Select_full_range_join | 0 |
- | Select_range | 99646 |
- | Select_range_check | 0 |
- | Select_scan | 30802 |
- | Slave_running | OFF |
- | Slave_open_temp_tables | 0 |
- | Slow_launch_threads | 0 |
- | Slow_queries | 0 |
- | Sort_merge_passes | 30 |
- | Sort_range | 500 |
- | Sort_rows | 30296250 |
- | Sort_scan | 4650 |
- | Table_locks_immediate | 1920382 |
- | Table_locks_waited | 0 |
- | Threads_cached | 0 |
- | Threads_created | 30022 |
- | Threads_connected | 1 |
- | Threads_running | 1 |
- | Uptime | 80380 |
- +--------------------------+------------+
- The status variables listed above have the following meaning:
- *Variable* *Meaning*
- `Aborted_clients' Number of connections aborted because the
- client died without closing the connection
- properly. *Note Communication errors::.
- `Aborted_connects' Number of tries to connect to the *MySQL*
- server that failed. *Note Communication
- errors::.
- `Bytes_received' Number of bytes received from all clients.
- `Bytes_sent' Number of bytes sent to all clients.
- `Connections' Number of connection attempts to the *MySQL*
- server.
- `Created_tmp_disk_tables' Number of implicit temporary tables on disk
- created while executing statements.
- `Created_tmp_tables' Number of implicit temporary tables in memory
- created while executing statements.
- `Created_tmp_files' How many temporary files `mysqld' has created.
- `Delayed_insert_threads' Number of delayed insert handler threads in
- use.
- `Delayed_writes' Number of rows written with `INSERT DELAYED'.
- `Delayed_errors' Number of rows written with `INSERT DELAYED'
- for which some error occurred (probably
- `duplicate key').
- `Flush_commands' Number of executed `FLUSH' commands.
- `Handler_delete' Number of times a row was deleted from a table.
- `Handler_read_first' Number of times the first entry was read from
- an index. If this is high, it suggests that
- the server is doing a lot of full index scans,
- for example, `SELECT col1 FROM foo', assuming
- that col1 is indexed.
- `Handler_read_key' Number of requests to read a row based on a
- key. If this is high, it is a good indication
- that your queries and tables are properly
- indexed.
- `Handler_read_next' Number of requests to read next row in key
- order. This will be incremented if you are
- querying an index column with a range
- constraint. This also will be incremented if
- you are doing an index scan.
- `Handler_read_rnd' Number of requests to read a row based on a
- fixed position. This will be high if you are
- doing a lot of queries that require sorting of
- the result.
- `Handler_read_rnd_next' Number of requests to read the next row in the
- datafile. This will be high if you are doing
- a lot of table scans. Generally this suggests
- that your tables are not properly indexed or
- that your queries are not written to take
- advantage of the indexes you have.
- `Handler_update' Number of requests to update a row in a table.
- `Handler_write' Number of requests to insert a row in a table.
- `Key_blocks_used' The number of used blocks in the key cache.
- `Key_read_requests' The number of requests to read a key block
- from the cache.
- `Key_reads' The number of physical reads of a key block
- from disk.
- `Key_write_requests' The number of requests to write a key block to
- the cache.
- `Key_writes' The number of physical writes of a key block
- to disk.
- `Max_used_connections' The maximum number of connections in use
- simultaneously.
- `Not_flushed_key_blocks' Key blocks in the key cache that have changed
- but haven't yet been flushed to disk.
- `Not_flushed_delayed_rows' Number of rows waiting to be written in
- `INSERT DELAYED' queues.
- `Open_tables' Number of tables that are open.
- `Open_files' Number of files that are open.
- `Open_streams' Number of streams that are open (used mainly
- for logging).
- `Opened_tables' Number of tables that have been opened.
- `Select_full_join' Number of joins without keys (Should be 0).
- `Select_full_range_join' Number of joins where we used a range search
- on the reference table.
- `Select_range' Number of joins where we used ranges on the
- first table. (It's normally not critical even
- if this is big.)
- `Select_scan' Number of joins where we scanned the first
- table.
- `Select_range_check' Number of joins without keys where we check
- for key usage after each row (Should be 0).
- `Questions' Number of queries sent to the server.
- `Slave_open_temp_tables' Number of temporary tables currently open by
- the slave thread.
- `Slow_launch_threads' Number of threads that have taken more than
- `slow_launch_time' to connect.
- `Slow_queries' Number of queries that have taken more than
- `long_query_time'. *Note Slow query log::.
- `Sort_merge_passes' Number of merges the sort has to do. If this
- value is large you should consider increasing
- `sort_buffer'.
- `Sort_range' Number of sorts that were done with ranges.
- `Sort_rows' Number of sorted rows.
- `Sort_scan' Number of sorts that were done by scanning
- the table.
- `Table_locks_immediate' Number of times a table lock was acquired
- immediately. Available after 3.23.33.
- `Table_locks_waited' Number of times a table lock could not be
- acquired immediately and a wait was needed. If
- this is high, and you have performance
- problems, you should first optimize your
- queries, and then either split your table(s)
- or use replication. Available after 3.23.33.
- `Threads_cached' Number of threads in the thread cache.
- `Threads_connected' Number of currently open connections.
- `Threads_created' Number of threads created to handle
- connections.
- `Threads_running' Number of threads that are not sleeping.
- `Uptime' How many seconds the server has been up.
- Some comments about the above:
- * If `Opened_tables' is big, then your `table_cache' variable is
- probably too small.
- * If `Key_reads' is big, then your `key_buffer_size' is probably too
- small. The cache miss rate can be calculated as
- `Key_reads'/`Key_read_requests' (see the worked example after this
- list).
- * If `Handler_read_rnd' is big, then you probably have a lot of
- queries that require *MySQL* to scan whole tables or you have
- joins that don't use keys properly.
- * If `Threads_created' is big, you may want to increase the
- `thread_cache_size' variable.
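- As a worked example of the key cache calculation above (the numbers are
- taken from the sample `SHOW STATUS' output, not from your server):
- Key_reads / Key_read_requests = 162040 / 96854827 = 0.0017
- which is well below the 0.01 rule of thumb given under `key_buffer_size'
- in the `SHOW VARIABLES' section below.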
- SHOW VARIABLES
- --------------
- SHOW VARIABLES [LIKE wild]
- `SHOW VARIABLES' shows the values of some *MySQL* system variables.
- You can also get this information using the `mysqladmin variables'
- command. If the default values are unsuitable, you can set most of
- these variables using command-line options when `mysqld' starts up.
- *Note Command-line options::.
- The output resembles that shown below, though the format and numbers may
- differ somewhat:
- +-------------------------+---------------------------+
- | Variable_name | Value |
- +-------------------------+---------------------------+
- | ansi_mode | OFF |
- | back_log | 50 |
- | basedir | /my/monty/ |
- | bdb_cache_size | 16777216 |
- | bdb_log_buffer_size | 32768 |
- | bdb_home | /my/monty/data/ |
- | bdb_max_lock | 10000 |
- | bdb_logdir | |
- | bdb_shared_data | OFF |
- | bdb_tmpdir | /tmp/ |
- | binlog_cache_size | 32768 |
- | concurrent_insert | ON |
- | connect_timeout | 5 |
- | datadir | /my/monty/data/ |
- | delay_key_write | ON |
- | delayed_insert_limit | 100 |
- | delayed_insert_timeout | 300 |
- | delayed_queue_size | 1000 |
- | flush | OFF |
- | flush_time | 0 |
- | have_bdb | YES |
- | have_gemini | NO |
- | have_innobase | YES |
- | have_raid | YES |
- | have_ssl | NO |
- | init_file | |
- | interactive_timeout | 28800 |
- | join_buffer_size | 131072 |
- | key_buffer_size | 16776192 |
- | language | /my/monty/share/english/ |
- | large_files_support | ON |
- | log | OFF |
- | log_update | OFF |
- | log_bin | OFF |
- | log_slave_updates | OFF |
- | long_query_time | 10 |
- | low_priority_updates | OFF |
- | lower_case_table_names | 0 |
- | max_allowed_packet | 1048576 |
- | max_binlog_cache_size | 4294967295 |
- | max_connections | 100 |
- | max_connect_errors | 10 |
- | max_delayed_threads | 20 |
- | max_heap_table_size | 16777216 |
- | max_join_size | 4294967295 |
- | max_sort_length | 1024 |
- | max_tmp_tables | 32 |
- | max_write_lock_count | 4294967295 |
- | myisam_recover_options | DEFAULT |
- | myisam_sort_buffer_size | 8388608 |
- | net_buffer_length | 16384 |
- | net_read_timeout | 30 |
- | net_retry_count | 10 |
- | net_write_timeout | 60 |
- | open_files_limit | 0 |
- | pid_file | /my/monty/data/donna.pid |
- | port | 3306 |
- | protocol_version | 10 |
- | record_buffer | 131072 |
- | query_buffer_size | 0 |
- | safe_show_database | OFF |
- | server_id | 0 |
- | skip_locking | ON |
- | skip_networking | OFF |
- | skip_show_database | OFF |
- | slow_launch_time | 2 |
- | socket | /tmp/mysql.sock |
- | sort_buffer | 2097116 |
- | table_cache | 64 |
- | table_type | MYISAM |
- | thread_cache_size | 4 |
- | thread_stack | 65536 |
- | tmp_table_size | 1048576 |
- | tmpdir | /tmp/ |
- | version | 3.23.29a-gamma-debug |
- | wait_timeout | 28800 |
- +-------------------------+---------------------------+
- Each option is described below. Values for buffer sizes, lengths, and
- stack sizes are given in bytes. You can specify values with a suffix
- of `K' or `M' to indicate kilobytes or megabytes. For example, `16M'
- indicates 16 megabytes. The case of suffix letters does not matter;
- `16M' and `16m' are equivalent:
- ``ansi_mode''
- Is `ON' if `mysqld' was started with `--ansi'. *Note ANSI mode::.
- ``back_log''
- The number of outstanding connection requests *MySQL* can have.
- This comes into play when the main *MySQL* thread gets *VERY* many
- connection requests in a very short time. It then takes some time
- (although very little) for the main thread to check the connection
- and start a new thread. The `back_log' value indicates how many
- requests can be stacked during this short time before *MySQL*
- momentarily stops answering new requests. You need to increase
- this only if you expect a large number of connections in a short
- period of time.
- In other words, this value is the size of the listen queue for
- incoming TCP/IP connections. Your operating system has its own
- limit on the size of this queue. The manual page for the Unix
- `listen(2)' system call should have more details. Check your OS
- documentation for the maximum value for this variable. Attempting
- to set `back_log' higher than your operating system limit will be
- ineffective.
- ``basedir''
- The value of the `--basedir' option.
- ``bdb_cache_size''
- The buffer that is allocated to cache index and rows for `BDB'
- tables. If you don't use `BDB' tables, you should start `mysqld'
- with `--skip-bdb' to not waste memory for this cache.
- ``bdb_log_buffer_size''
- The size of the buffer that is allocated for the Berkeley DB
- transaction log. If you don't use `BDB' tables, you should set this
- to 0 or start `mysqld' with `--skip-bdb' to not waste memory for
- this buffer.
- ``bdb_home''
- The value of the `--bdb-home' option.
- ``bdb_max_lock''
- The maximum number of locks (1000 by default) you can have active
- on a BDB table. You should increase this if you get errors of type
- `bdb: Lock table is out of available locks' or `Got error 12 from
- ...' when you do long transactions or when `mysqld' has to
- examine a lot of rows to calculate the query.
- ``bdb_logdir''
- The value of the `--bdb-logdir' option.
- ``bdb_shared_data''
- Is `ON' if you are using `--bdb-shared-data'.
- ``bdb_tmpdir''
- The value of the `--bdb-tmpdir' option.
- ``binlog_cache_size''
- The size of the cache to hold the SQL statements for the binary log
- during a transaction. If you often use big, multi-statement
- transactions you can increase this to get more performance. *Note
- COMMIT::.
- ``character_set''
- The default character set.
- ``character_sets''
- The supported character sets.
- ``concurrent_insert''
- If `ON' (the default), *MySQL* will allow you to use `INSERT' on
- `MyISAM' tables at the same time as you run `SELECT' queries on
- them. You can turn this option off by starting `mysqld' with
- `--safe' or `--skip-new'.
- ``connect_timeout''
- The number of seconds the `mysqld' server is waiting for a connect
- packet before responding with `Bad handshake'.
- ``datadir''
- The value of the `--datadir' option.
- ``delay_key_write''
- If enabled (it is on by default), *MySQL* will honor the
- `delay_key_write' option of `CREATE TABLE'. This means that the key
- buffer for tables with this option will not get flushed on every
- index update, but only when a table is closed. This will speed up
- writes on keys a lot, but you should add automatic checking of all
- tables with `myisamchk --fast --force' if you use this. Note that
- if you start `mysqld' with the `--delay-key-write-for-all-tables'
- option this means that all tables will be treated as if they were
- created with the `delay_key_write' option. You can clear this flag
- by starting `mysqld' with `--skip-new' or `--safe-mode'.
- ``delayed_insert_limit''
- After inserting `delayed_insert_limit' rows, the `INSERT DELAYED'
- handler will check if there are any `SELECT' statements pending.
- If so, it allows these to execute before continuing.
- ``delayed_insert_timeout''
- How long an `INSERT DELAYED' thread should wait for `INSERT'
- statements before terminating.
- ``delayed_queue_size''
- What size queue (in rows) should be allocated for handling `INSERT
- DELAYED'. If the queue becomes full, any client that does `INSERT
- DELAYED' will wait until there is room in the queue again.
- ``flush''
- This is `ON' if you have started *MySQL* with the `--flush' option.
- ``flush_time''
- If this is set to a non-zero value, then every `flush_time'
- seconds all tables will be closed (to free up resources and sync
- things to disk). We only recommend this option on Win95, Win98, or
- on systems with very few resources.
- ``have_bdb''
- `YES' if `mysqld' supports Berkeley DB tables. `DISABLED' if
- `--skip-bdb' is used.
- ``have_gemini''
- `YES' if `mysqld' supports Gemini tables. `DISABLED' if
- `--skip-gemini' is used.
- ``have_innobase''
- `YES' if `mysqld' supports Innobase tables. `DISABLED' if
- `--skip-innobase' is used.
- ``have_raid''
- `YES' if `mysqld' supports the `RAID' option.
- ``have_ssl''
- `YES' if `mysqld' supports SSL (encryption) on the client/server
- protocol.
- ``init_file''
- The name of the file specified with the `--init-file' option when
- you start the server. This is a file of SQL statements you want
- the server to execute when it starts.
- ``interactive_timeout''
- The number of seconds the server waits for activity on an
- interactive connection before closing it. An interactive client
- is defined as a client that uses the `CLIENT_INTERACTIVE' option to
- `mysql_real_connect()'. See also `wait_timeout'.
- ``join_buffer_size''
- The size of the buffer that is used for full joins (joins that do
- not use indexes). The buffer is allocated one time for each full
- join between two tables. Increase this value to get a faster full
- join when adding indexes is not possible. (Normally the best way
- to get fast joins is to add indexes.)
- ``key_buffer_size''
- Index blocks are buffered and are shared by all threads.
- `key_buffer_size' is the size of the buffer used for index blocks.
- Increase this to get better index handling (for all reads and
- multiple writes) to as much as you can afford; 64M on a 256M
- machine that mainly runs *MySQL* is quite common. If you,
- however, make this too big (more than 50% of your total memory?)
- your system may start to page and become REALLY slow. Remember
- that because *MySQL* does not cache data reads, you will have
- to leave some room for the OS filesystem cache.
- You can check the performance of the key buffer by doing `show
- status' and examine the variables `Key_read_requests',
- `Key_reads', `Key_write_requests', and `Key_writes'. The
- `Key_reads/Key_read_requests' ratio should normally be < 0.01. The
- `Key_writes/Key_write_requests' ratio is usually near 1 if you are using
- mostly updates/deletes but may be much smaller if you tend to do
- updates that affect many rows at the same time or if you are using
- `delay_key_write'. *Note SHOW::.
- To get even more speed when writing many rows at the same time, use
- `LOCK TABLES'. *Note `LOCK TABLES': LOCK TABLES.
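- For example, a minimal check of the key cache counters mentioned above
- (the `Key%' pattern also matches `Key_blocks_used'):
- mysql> SHOW STATUS LIKE 'Key%';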
- ``language''
- The language used for error messages.
- ``large_files_support''
- If `mysqld' was compiled with options for big file support.
- ``locked_in_memory''
- If `mysqld' was locked in memory with `--memlock'.
- ``log''
- If logging of all queries is enabled.
- ``log_update''
- If the update log is enabled.
- ``log_bin''
- If the binary log is enabled.
- ``log_slave_updates''
- If the updates from the slave should be logged.
- ``long_query_time''
- If a query takes longer than this (in seconds), the `Slow_queries'
- counter will be incremented. If you are using
- `--log-slow-queries', the query will be logged to the slow query
- logfile. *Note Slow query log::.
- ``lower_case_table_names''
- If set to 1, table names are stored in lowercase on disk. This
- enables case-insensitive access to table names also on Unix.
- *Note Name case sensitivity::.
- ``max_allowed_packet''
- The maximum size of one packet. The message buffer is initialized
- to `net_buffer_length' bytes, but can grow up to
- `max_allowed_packet' bytes when needed. This value by default is
- small, to catch big (possibly wrong) packets. You must increase
- this value if you are using big `BLOB' columns. It should be as
- big as the biggest `BLOB' you want to use. The current protocol
- limits `max_allowed_packet' to 16M.
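- For example, to allow `BLOB' values up to the 16M protocol limit you
- could, hypothetically, start the server with:
- shell> safe_mysqld -O max_allowed_packet=16M &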
- ``max_binlog_cache_size''
- If a multi-statement transaction requires more than this amount of
- memory, one will get the error "Multi-statement transaction required
- more than 'max_binlog_cache_size' bytes of storage".
- ``max_binlog_size''
- Available after 3.23.33. If a write to the binary (replication) log
- exceeds the given value, rotate the logs. You cannot set it to less
- than 1024 bytes, or more than 1 GB. Default is 1 GB.
- ``max_connections''
- The number of simultaneous clients allowed. Increasing this value
- increases the number of file descriptors that `mysqld' requires.
- See below for comments on file descriptor limits. *Note Too many
- connections::.
- ``max_connect_errors''
- If there are more than this number of interrupted connections from
- a host, that host will be blocked from further connections. You
- can unblock a host with the command `FLUSH HOSTS'.
- ``max_delayed_threads''
- Don't start more than this number of threads to handle `INSERT
- DELAYED' statements. If you try to insert data into a new table
- after all `INSERT DELAYED' threads are in use, the row will be
- inserted as if the `DELAYED' attribute wasn't specified.
- ``max_heap_table_size''
- Don't allow creation of heap tables bigger than this.
- ``max_join_size''
- Joins that are probably going to read more than `max_join_size'
- records return an error. Set this value if your users tend to
- perform joins that lack a `WHERE' clause, that take a long time,
- and that return millions of rows.
- ``max_sort_length''
- The number of bytes to use when sorting `BLOB' or `TEXT' values
- (only the first `max_sort_length' bytes of each value are used;
- the rest are ignored).
- ``max_user_connections''
- The maximum number of active connections for a single user (0 = no
- limit).
- ``max_tmp_tables''
- (This option doesn't yet do anything.) Maximum number of
- temporary tables a client can keep open at the same time.
- ``max_write_lock_count''
- After this many write locks, allow some read locks to run in
- between.
- ``myisam_recover_options''
- The value of the `--myisam-recover' option.
- ``myisam_sort_buffer_size''
- The buffer that is allocated when sorting the index when doing a
- `REPAIR' or when creating indexes with `CREATE INDEX' or `ALTER
- TABLE'.
- ``net_buffer_length''
- The communication buffer is reset to this size between queries.
- This should not normally be changed, but if you have very little
- memory, you can set it to the expected size of a query. (That is,
- the expected length of SQL statements sent by clients. If
- statements exceed this length, the buffer is automatically
- enlarged, up to `max_allowed_packet' bytes.)
- ``net_read_timeout''
- Number of seconds to wait for more data from a connection before
- aborting the read. Note that when we don't expect data from a
- connection, the timeout is defined by `net_write_timeout'.
- ``net_retry_count''
- If a read on a communication port is interrupted, retry this many
- times before giving up. This value should be quite high on
- `FreeBSD' as internal interrupts are sent to all threads.
- ``net_write_timeout''
- Number of seconds to wait for a block to be written to a
- connection before aborting the write.
- ``open_files_limit''
- If this is not 0, then `mysqld' will use this value to reserve file
- descriptors to use with `getrlimit()'. If this value is 0 then
- `mysqld' will reserve `max_connections*5' or `max_connections +
- table_cache*2' (whichever is larger) number of files. You should
- try increasing this if `mysqld' gives you the error 'Too many open
- files'.
- ``pid_file''
- The value of the `--pid-file' option.
- ``port''
- The value of the `--port' option.
- ``protocol_version''
- The protocol version used by the *MySQL* server.
- ``record_buffer''
- Each thread that does a sequential scan allocates a buffer of this
- size for each table it scans. If you do many sequential scans, you
- may want to increase this value.
- ``query_buffer_size''
- The initial allocation of the query buffer. If most of your
- queries are long (like when inserting blobs), you should increase
- this!
- ``safe_show_database''
- Don't show databases for which the user doesn't have any database
- or table privileges. This can improve security if you're concerned
- about people being able to see what databases other users have.
- See also `skip_show_database'.
- ``server_id''
- The value of the `--server-id' option.
- ``skip_locking''
- Is OFF if `mysqld' uses external locking.
- ``skip_networking''
- Is ON if we only allow local (socket) connections.
- ``skip_show_database''
- This prevents people from doing `SHOW DATABASES' if they don't have
- the `PROCESS_PRIV' privilege. This can improve security if you're
- concerned about people being able to see what databases other users
- have. See also `safe_show_database'.
- ``slow_launch_time''
- If creating the thread takes longer than this value (in seconds),
- the `Slow_launch_threads' counter will be incremented.
- ``socket''
- The Unix socket used by the server.
- ``sort_buffer''
- Each thread that needs to do a sort allocates a buffer of this
- size. Increase this value for faster `ORDER BY' or `GROUP BY'
- operations. *Note Temporary files::.
- ``table_cache''
- The number of open tables for all threads. Increasing this value
- increases the number of file descriptors that `mysqld' requires.
- *MySQL* needs two file descriptors for each unique open table.
- See below for comments on file descriptor limits. You can check if
- you need to increase the table cache by checking the
- `Opened_tables' variable. *Note SHOW::. If this variable is big
- and you don't do `FLUSH TABLES' a lot (which just forces all
- tables to be closed and reopened), then you should increase the
- value of this variable.
- Make sure that your operating system can handle the number of open
- file descriptors implied by the `table_cache' setting. If
- `table_cache' is set too high, *MySQL* may run out of file
- descriptors and refuse connections, fail to perform queries, and
- be very unreliable.
- For information about how the table cache works, see *Note Table
- cache::.
- ``table_type''
- The default table type.
- ``thread_cache_size''
- How many threads we should keep in a cache for reuse. When a
- client disconnects, the client's threads are put in the cache if
- there aren't more than `thread_cache_size' threads from before.
- All new threads are first taken from the cache, and only when the
- cache is empty is a new thread created. This variable can be
- increased to improve performance if you have a lot of new
- connections. (Normally this doesn't give a notable performance
- improvement if you have a good thread implementation.) By examining
- the difference between the `Connections' and `Threads_created' status
- variables, you can see how efficient the current thread cache is for you.
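- For example, in the sample status output shown earlier, `Connections' is
- 30023 and `Threads_created' is 30022, so almost every connection creates
- a new thread; on such a server increasing `thread_cache_size' would
- likely help. You can check your own counters with:
- mysql> SHOW STATUS LIKE 'Threads_created';
- mysql> SHOW STATUS LIKE 'Connections';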
- ``thread_concurrency''
- On Solaris, `mysqld' will call `thr_setconcurrency()' with this
- value. `thr_setconcurrency()' permits the application to give the
- threads system a hint for the desired number of threads that should
- be run at the same time.
- ``thread_stack''
- The stack size for each thread. Many of the limits detected by the
- `crash-me' test are dependent on this value. The default is large
- enough for normal operation. *Note Benchmarks::.
- ``timezone''
- The timezone for the server.
- ``tmp_table_size''
- If an in-memory temporary table exceeds this size, *MySQL* will
- automatically convert it to an on-disk `MyISAM' table. Increase
- the value of `tmp_table_size' if you do many advanced `GROUP BY'
- queries and you have lots of memory.
- ``tmpdir''
- The directory used for temporary files and temporary tables.
- ``version''
- The version number for the server.
- ``wait_timeout''
- The number of seconds the server waits for activity on a
- connection before closing it. See also `interactive_timeout'.
- The manual section that describes tuning *MySQL* contains some
- information about how to tune the above variables. *Note Server
- parameters::.
- SHOW Information About Log Files
- --------------------------------
- `SHOW LOGS' shows you status information about existing log files. It
- currently only displays information about Berkeley DB log files.
- * `File' shows the full path to the log file
- * `Type' shows the type of the log file (`BDB' for Berkeley DB log
- files)
- * `Status' shows the status of the log file (`FREE' if the file can
- be removed, or `IN USE' if the file is needed by the transaction
- subsystem)
- SHOW Information About Connected Threads (Processes)
- ----------------------------------------------------
- `SHOW PROCESSLIST' shows you which threads are running. You can also
- get this information using the `mysqladmin processlist' command. If
- you have the *process* privilege, you can see all threads. Otherwise,
- you can see only your own threads. *Note `KILL': KILL. If you don't
- use the `FULL' option, then only the first 100 characters of each query
- will be shown.
- This command is very useful if you get the 'too many connections' error
- message and want to find out what's going on. *MySQL* reserves one
- extra connection for a client with the `Process_priv' privilege to
- ensure that you are always able to log in and check the system
- (assuming you are not giving this privilege to all your users).
- SHOW GRANTS (Privileges) for a User
- -----------------------------------
- `SHOW GRANTS FOR user' lists the grant commands that must be issued to
- duplicate the grants for a user.
- mysql> SHOW GRANTS FOR root@localhost;
- +---------------------------------------------------------------------+
- | Grants for root@localhost |
- +---------------------------------------------------------------------+
- | GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' WITH GRANT OPTION |
- +---------------------------------------------------------------------+
- SHOW CREATE TABLE
- -----------------
- Shows a `CREATE TABLE' statement that will create the given table:
- mysql> show create table t\G
- *************************** 1. row ***************************
- Table: t
- Create Table: CREATE TABLE t (
- id int(11) default NULL auto_increment,
- s char(60) default NULL,
- PRIMARY KEY (id)
- ) TYPE=MyISAM
- `SHOW CREATE TABLE' will quote table and column names according to
- the `SQL_QUOTE_SHOW_CREATE' option. *Note `SET OPTION
- SQL_QUOTE_SHOW_CREATE': SET OPTION.
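- For example, a hypothetical session that turns quoting on before
- repeating the statement above:
- mysql> SET OPTION SQL_QUOTE_SHOW_CREATE=1;
- mysql> SHOW CREATE TABLE t\G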
- `EXPLAIN' Syntax (Get Information About a `SELECT')
- ===================================================
- EXPLAIN tbl_name
- or EXPLAIN SELECT select_options
- `EXPLAIN tbl_name' is a synonym for `DESCRIBE tbl_name' or `SHOW
- COLUMNS FROM tbl_name'.
- When you precede a `SELECT' statement with the keyword `EXPLAIN',
- *MySQL* explains how it would process the `SELECT', providing
- information about how tables are joined and in which order.
- With the help of `EXPLAIN', you can see when you must add indexes to
- tables to get a faster `SELECT' that uses indexes to find the records.
- You can also see if the optimizer joins the tables in an optimal order.
- To force the optimizer to use a specific join order for a `SELECT'
- statement, add a `STRAIGHT_JOIN' clause.
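- For example, a hypothetical query (with made-up table and column names)
- that forces the tables to be joined in the order they are listed in the
- `FROM' clause:
- mysql> SELECT STRAIGHT_JOIN t1.a, t2.b FROM t1, t2 WHERE t1.a = t2.a;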
- For non-simple joins, `EXPLAIN' returns a row of information for each
- table used in the `SELECT' statement. The tables are listed in the order
- they would be read. *MySQL* resolves all joins using a single-sweep
- multi-join method. This means that *MySQL* reads a row from the first
- table, then finds a matching row in the second table, then in the third
- table and so on. When all tables are processed, it outputs the selected
- columns and backtracks through the table list until a table is found
- for which there are more matching rows. The next row is read from this
- table and the process continues with the next table.
- Output from `EXPLAIN' includes the following columns:
- `table'
- The table to which the row of output refers.
- `type'
- The join type. Information about the various types is given below.
- `possible_keys'
- The `possible_keys' column indicates which indexes *MySQL* could
- use to find the rows in this table. Note that this column is
- totally independent of the order of the tables. That means that
- some of the keys in possible_keys may not be usable in practice
- with the generated table order.
- If this column is empty, there are no relevant indexes. In this
- case, you may be able to improve the performance of your query by
- examining the `WHERE' clause to see if it refers to some column or
- columns that would be suitable for indexing. If so, create an
- appropriate index and check the query with `EXPLAIN' again. *Note
- ALTER TABLE::.
- To see what indexes a table has, use `SHOW INDEX FROM tbl_name'.
- `key'
- The `key' column indicates the key that *MySQL* actually decided
- to use. The key is `NULL' if no index was chosen. If *MySQL*
- chooses the wrong index, you can probably force *MySQL* to use
- another index by using `myisamchk --analyze', *Note myisamchk
- syntax::, or by using `USE INDEX/IGNORE INDEX'. *Note JOIN::.
- `key_len'
- The `key_len' column indicates the length of the key that *MySQL*
- decided to use. The length is `NULL' if the `key' is `NULL'. Note
- that this tells us how many parts of a multi-part key *MySQL* will
- actually use.
- `ref'
- The `ref' column shows which columns or constants are used with the
- `key' to select rows from the table.
- `rows'
- The `rows' column indicates the number of rows *MySQL* believes it
- must examine to execute the query.
- `Extra'
- This column contains additional information of how *MySQL* will
- resolve the query. Here is an explanation of the different text
- strings that can be found in this column:
- `Distinct'
- *MySQL* will not continue searching for more rows for the
- current row combination after it has found the first matching
- row.
- `Not exists'
- *MySQL* was able to do a `LEFT JOIN' optimization on the
- query and will not examine more rows in this table for a row
- combination after it finds one row that matches the `LEFT
- JOIN' criteria.
- ``range checked for each record (index map: #)''
- *MySQL* didn't find a really good index to use. It will,
- instead, for each row combination in the preceding tables, do
- a check on which index to use (if any), and use this index to
- retrieve the rows from the table. This isn't very fast but
- is faster than having to do a join without an index.
- `Using filesort'
- *MySQL* will need to do an extra pass to find out how to
- retrieve the rows in sorted order. The sort is done by going
- through all rows according to the `join type' and storing the
- sort key + pointer to the row for all rows that match the
- `WHERE'. Then the keys are sorted. Finally the rows are
- retrieved in sorted order.
- `Using index'
- The column information is retrieved from the table using only
- information in the index tree without having to do an
- additional seek to read the actual row. This can be done
- when all the used columns for the table are part of the same
- index.
- `Using temporary'
- To resolve the query *MySQL* will need to create a temporary
- table to hold the result. This typically happens if you do an
- `ORDER BY' on a different column set than you did a `GROUP
- BY' on.
- `Where used'
- A `WHERE' clause will be used to restrict which rows will be
- matched against the next table or sent to the client. If you
- don't have this information and the table is of type `ALL' or
- `index', you may have something wrong in your query (if you
- don't intend to fetch/examine all rows from the table).
- If you want to get your queries as fast as possible, you should
- look out for `Using filesort' and `Using temporary'.
- The different join types are listed below, ordered from best to worst
- type:
- `system'
- The table has only one row (= system table). This is a special
- case of the `const' join type.
- `const'
- The table has at most one matching row, which will be read at the
- start of the query. Because there is only one row, values from the
- column in this row can be regarded as constants by the rest of the
- optimizer. `const' tables are very fast as they are read only once!
- `eq_ref'
- One row will be read from this table for each combination of rows
- from the previous tables. This is the best possible join type,
- other than the `const' types. It is used when all parts of an
- index are used by the join and the index is `UNIQUE' or a `PRIMARY
- KEY'.
- `ref'
- All rows with matching index values will be read from this table
- for each combination of rows from the previous tables. `ref' is
- used if the join uses only a leftmost prefix of the key, or if the
- key is not `UNIQUE' or a `PRIMARY KEY' (in other words, if the
- join cannot select a single row based on the key value). If the
- key that is used matches only a few rows, this join type is good.
- `range'
- Only rows that are in a given range will be retrieved, using an
- index to select the rows. The `key' column indicates which index
- is used. The `key_len' contains the longest key part that was
- used. The `ref' column will be NULL for this type.
- `index'
- This is the same as `ALL', except that only the index tree is
- scanned. This is usually faster than `ALL', as the index file is
- usually smaller than the data file.
- `ALL'
- A full table scan will be done for each combination of rows from
- the previous tables. This is normally not good if the table is
- the first table not marked `const', and usually *very* bad in all
- other cases. You normally can avoid `ALL' by adding more indexes,
- so that the row can be retrieved based on constant values or
- column values from earlier tables.
- You can get a good indication of how good a join is by multiplying all
- values in the `rows' column of the `EXPLAIN' output. This should tell
- you roughly how many rows *MySQL* must examine to execute the query.
- This number is also used when you restrict queries with the
- `max_join_size' variable. *Note Server parameters::.
- The following example shows how a `JOIN' can be optimized progressively
- using the information provided by `EXPLAIN'.
- Suppose you have the `SELECT' statement shown below, which you examine
- using `EXPLAIN':
- EXPLAIN SELECT tt.TicketNumber, tt.TimeIn,
- tt.ProjectReference, tt.EstimatedShipDate,
- tt.ActualShipDate, tt.ClientID,
- tt.ServiceCodes, tt.RepetitiveID,
- tt.CurrentProcess, tt.CurrentDPPerson,
- tt.RecordVolume, tt.DPPrinted, et.COUNTRY,
- et_1.COUNTRY, do.CUSTNAME
- FROM tt, et, et AS et_1, do
- WHERE tt.SubmitTime IS NULL
- AND tt.ActualPC = et.EMPLOYID
- AND tt.AssignedPC = et_1.EMPLOYID
- AND tt.ClientID = do.CUSTNMBR;
- For this example, assume that:
- * The columns being compared have been declared as follows:
- *Table* *Column* *Column type*
- `tt' `ActualPC' `CHAR(10)'
- `tt' `AssignedPC' `CHAR(10)'
- `tt' `ClientID' `CHAR(10)'
- `et' `EMPLOYID' `CHAR(15)'
- `do' `CUSTNMBR' `CHAR(15)'
- * The tables have the indexes shown below:
- *Table* *Index*
- `tt' `ActualPC'
- `tt' `AssignedPC'
- `tt' `ClientID'
- `et' `EMPLOYID' (primary key)
- `do' `CUSTNMBR' (primary key)
- * The `tt.ActualPC' values aren't evenly distributed.
- Initially, before any optimizations have been performed, the `EXPLAIN'
- statement produces the following information:
- table type possible_keys key key_len ref rows Extra
- et ALL PRIMARY NULL NULL NULL 74
- do ALL PRIMARY NULL NULL NULL 2135
- et_1 ALL PRIMARY NULL NULL NULL 74
- tt ALL AssignedPC,ClientID,ActualPC NULL NULL NULL 3872
- range checked for each record (key map: 35)
- Because `type' is `ALL' for each table, this output indicates that
- *MySQL* is doing a full join for all tables! This will take quite a
- long time, as the product of the number of rows in each table must be
- examined! For the case at hand, this is `74 * 2135 * 74 * 3872 =
- 45,268,558,720' rows. If the tables were bigger, you can only imagine
- how long it would take.
- One problem here is that *MySQL* can't (yet) use indexes on columns
- efficiently if they are declared differently. In this context,
- `VARCHAR' and `CHAR' are the same unless they are declared as different
- lengths. Because `tt.ActualPC' is declared as `CHAR(10)' and
- `et.EMPLOYID' is declared as `CHAR(15)', there is a length mismatch.
- To fix this disparity between column lengths, use `ALTER TABLE' to
- lengthen `ActualPC' from 10 characters to 15 characters:
- mysql> ALTER TABLE tt MODIFY ActualPC VARCHAR(15);
- Now `tt.ActualPC' and `et.EMPLOYID' are both `VARCHAR(15)'. Executing
- the `EXPLAIN' statement again produces this result:
- table type possible_keys key key_len ref rows Extra
- tt ALL AssignedPC,ClientID,ActualPC NULL NULL NULL 3872 where used
- do ALL PRIMARY NULL NULL NULL 2135
- range checked for each record (key map: 1)
- et_1 ALL PRIMARY NULL NULL NULL 74
- range checked for each record (key map: 1)
- et eq_ref PRIMARY PRIMARY 15 tt.ActualPC 1
- This is not perfect, but is much better (the product of the `rows'
- values is now less by a factor of 74). This version is executed in a
- couple of seconds.
- A second alteration can be made to eliminate the column length
- mismatches for the `tt.AssignedPC = et_1.EMPLOYID' and `tt.ClientID =
- do.CUSTNMBR' comparisons:
- mysql> ALTER TABLE tt MODIFY AssignedPC VARCHAR(15),
- MODIFY ClientID VARCHAR(15);
- Now `EXPLAIN' produces the output shown below:
- table type possible_keys key key_len ref rows Extra
- et ALL PRIMARY NULL NULL NULL 74
- tt ref AssignedPC,ClientID,ActualPC ActualPC 15 et.EMPLOYID 52 where used
- et_1 eq_ref PRIMARY PRIMARY 15 tt.AssignedPC 1
- do eq_ref PRIMARY PRIMARY 15 tt.ClientID 1
- This is almost as good as it can get.
- The remaining problem is that, by default, *MySQL* assumes that values
- in the `tt.ActualPC' column are evenly distributed, and that isn't the
- case for the `tt' table. Fortunately, it is easy to tell *MySQL* about
- this:
- shell> myisamchk --analyze PATH_TO_MYSQL_DATABASE/tt
- shell> mysqladmin refresh
- Now the join is perfect, and `EXPLAIN' produces this result:
- table type possible_keys key key_len ref rows Extra
- tt ALL AssignedPC,ClientID,ActualPC NULL NULL NULL 3872 where used
- et eq_ref PRIMARY PRIMARY 15 tt.ActualPC 1
- et_1 eq_ref PRIMARY PRIMARY 15 tt.AssignedPC 1
- do eq_ref PRIMARY PRIMARY 15 tt.ClientID 1
- Note that the `rows' column in the output from `EXPLAIN' is an educated
- guess from the *MySQL* join optimizer. To optimize a query, you should
- check if the numbers are even close to the truth. If not, you may get
- better performance by using `STRAIGHT_JOIN' in your `SELECT' statement
- and trying to list the tables in a different order in the `FROM' clause.
- `DESCRIBE' Syntax (Get Information About Columns)
- =================================================
- {DESCRIBE | DESC} tbl_name {col_name | wild}
- `DESCRIBE' provides information about a table's columns. `col_name'
- may be a column name or a string containing the SQL `%' and `_'
- wild-card characters.
- If the column types are different than you expect them to be based on a
- `CREATE TABLE' statement, note that *MySQL* sometimes changes column
- types. *Note Silent column changes::.
- This statement is provided for Oracle compatibility.
- The `SHOW' statement provides similar information. *Note `SHOW': SHOW.
- `BEGIN/COMMIT/ROLLBACK' Syntax
- ==============================
- By default, *MySQL* runs in `autocommit' mode. This means that as soon
- as you execute an update, *MySQL* will store the update on disk.
- If you are using transaction-safe tables (like `BDB', `INNOBASE' or
- `GEMINI'), you can put *MySQL* into non-`autocommit' mode with the
- following command:
- SET AUTOCOMMIT=0
- After this you must use `COMMIT' to store your changes to disk or
- `ROLLBACK' if you want to ignore the changes you have made since the
- beginning of your transaction.
- If you want to switch from `AUTOCOMMIT' mode for one series of
- statements, you can use the `BEGIN' or `BEGIN WORK' statement:
- BEGIN;
- SELECT @A:=SUM(salary) FROM table1 WHERE type=1;
- UPDATE table2 SET summary=@A WHERE type=1;
- COMMIT;
- Note that if you are using non-transaction-safe tables, the changes
- will be stored at once, independent of the status of the `autocommit'
- mode.
- If you do a `ROLLBACK' when you have updated a non-transactional table
- you will get an error (`ER_WARNING_NOT_COMPLETE_ROLLBACK') as a
- warning. All transaction-safe tables will be restored but any
- non-transactional table will not change.
- If you are using `BEGIN' or `SET AUTOCOMMIT=0', you should use the
- *MySQL* binary log for backups instead of the old update log. The
- transaction is stored in the binary log in one chunk, during `COMMIT',
- to ensure that `ROLLBACK':ed transactions are not stored. *Note
- Binary log::.
- The following commands automatically end a transaction (as if you had
- done a `COMMIT' before executing the command):
- `ALTER TABLE' `BEGIN' `CREATE INDEX'
- `DROP DATABASE' `DROP TABLE' `RENAME TABLE'
- `TRUNCATE'
- `LOCK TABLES/UNLOCK TABLES' Syntax
- ==================================
- LOCK TABLES tbl_name [AS alias] {READ | [READ LOCAL] | [LOW_PRIORITY] WRITE}
- [, tbl_name {READ | [LOW_PRIORITY] WRITE} ...]
- ...
- UNLOCK TABLES
- `LOCK TABLES' locks tables for the current thread. `UNLOCK TABLES'
- releases any locks held by the current thread. All tables that are
- locked by the current thread are automatically unlocked when the thread
- issues another `LOCK TABLES', or when the connection to the server is
- closed.
- If a thread obtains a `READ' lock on a table, that thread (and all other
- threads) can only read from the table. If a thread obtains a `WRITE'
- lock on a table, then only the thread holding the lock can `READ' from
- or `WRITE' to the table. Other threads are blocked.
- The difference between `READ LOCAL' and `READ' is that `READ LOCAL'
- allows non-conflicting `INSERT' statements to execute while the lock is
- held. This can't, however, be used if you are going to manipulate the
- database files outside *MySQL* while you hold the lock.
- Each thread waits (without timing out) until it obtains all the locks
- it has requested.
- `WRITE' locks normally have higher priority than `READ' locks, to
- ensure that updates are processed as soon as possible. This means that
- if one thread obtains a `READ' lock and then another thread requests a
- `WRITE' lock, subsequent `READ' lock requests will wait until the
- `WRITE' thread has gotten the lock and released it. You can use
- `LOW_PRIORITY WRITE' locks to allow other threads to obtain `READ'
- locks while the thread is waiting for the `WRITE' lock. You should only
- use `LOW_PRIORITY WRITE' locks if you are sure that there will
- eventually be a time when no threads will have a `READ' lock.
- When you use `LOCK TABLES', you must lock all tables that you are going
- to use and you must use the same alias that you are going to use in
- your queries! If you are using a table multiple times in a query (with
- aliases), you must get a lock for each alias! This policy ensures that
- table locking is deadlock free and makes the locking code smaller,
- simpler and much faster.
- Note that you should *NOT* lock any tables that you are using with
- `INSERT DELAYED'. This is because, in this case, the `INSERT' is
- done by a separate thread.
- Normally, you don't have to lock tables, as all single `UPDATE'
- statements are atomic; no other thread can interfere with any other
- currently executing SQL statement. There are a few cases when you would
- like to lock tables anyway:
- * If you are going to run many operations on a bunch of tables, it's
- much faster to lock the tables you are going to use. The downside
- is, of course, that no other thread can update a `READ'-locked
- table and no other thread can read a `WRITE'-locked table.
- * *MySQL* doesn't support a transaction environment, so you must use
- `LOCK TABLES' if you want to ensure that no other thread comes
- between a `SELECT' and an `UPDATE'. The example shown below
- requires `LOCK TABLES' in order to execute safely:
- mysql> LOCK TABLES trans READ, customer WRITE;
- mysql> select sum(value) from trans where customer_id= some_id;
- mysql> update customer set total_value=sum_from_previous_statement
- where customer_id=some_id;
- mysql> UNLOCK TABLES;
- Without `LOCK TABLES', there is a chance that another thread might
- insert a new row in the `trans' table between execution of the
- `SELECT' and `UPDATE' statements.
- By using incremental updates (`UPDATE customer SET
- value=value+new_value') or the `LAST_INSERT_ID()' function, you can
- avoid using `LOCK TABLES' in many cases.
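- For example, the incremental form of the `customer' update shown above
- needs no surrounding `LOCK TABLES' (column and value names are the same
- hypothetical ones used in the earlier example):
- mysql> UPDATE customer SET total_value=total_value+new_value
- WHERE customer_id=some_id;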
- You can also solve some cases by using the user-level lock functions
- `GET_LOCK()' and `RELEASE_LOCK()'. These locks are saved in a hash
- table in the server and implemented with `pthread_mutex_lock()' and
- `pthread_mutex_unlock()' for high speed. *Note Miscellaneous
- functions::.
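- For example, a hypothetical pair of statements that serialize access to
- an application-defined resource; the first waits up to 10 seconds to
- obtain the named lock, the second releases it:
- mysql> SELECT GET_LOCK("my_app_lock", 10);
- mysql> SELECT RELEASE_LOCK("my_app_lock");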
- See *Note Internal locking::, for more information on locking policy.
- You can also lock all tables in all databases with read locks with the
- `FLUSH TABLES WITH READ LOCK' command. *Note FLUSH::. This is a very
- convenient way to get backups if you have a file system, like Veritas,
- that can take snapshots in time.
- *NOTE*: `LOCK TABLES' is not transaction safe and will automatically
- commit any active transactions before attempting to lock the tables.
- `SET' Syntax
- ============
- SET [OPTION] SQL_VALUE_OPTION= value, ...
- `SET OPTION' sets various options that affect the operation of the
- server or your client. Any option you set remains in effect until the
- current session ends, or until you set the option to a different value.
- `CHARACTER SET character_set_name | DEFAULT'
- This maps all strings from and to the client with the given
- mapping. Currently the only option for `character_set_name' is