SQL_BEGIN | = | 'BEGIN'.freeze |
SQL_COMMIT | = | 'COMMIT'.freeze |
SQL_RELEASE_SAVEPOINT | = | 'RELEASE SAVEPOINT autopoint_%d'.freeze |
SQL_ROLLBACK | = | 'ROLLBACK'.freeze |
SQL_ROLLBACK_TO_SAVEPOINT | = | 'ROLLBACK TO SAVEPOINT autopoint_%d'.freeze |
SQL_SAVEPOINT | = | 'SAVEPOINT autopoint_%d'.freeze |
TRANSACTION_BEGIN | = | 'Transaction.begin'.freeze |
TRANSACTION_COMMIT | = | 'Transaction.commit'.freeze |
TRANSACTION_ROLLBACK | = | 'Transaction.rollback'.freeze |
TRANSACTION_ISOLATION_LEVELS | = | {:uncommitted=>'READ UNCOMMITTED'.freeze, :committed=>'READ COMMITTED'.freeze, :repeatable=>'REPEATABLE READ'.freeze, :serializable=>'SERIALIZABLE'.freeze} |
POSTGRES_DEFAULT_RE | = | /\A(?:B?('.*')::[^']+|\((-?\d+(?:\.\d+)?)\))\z/ |
MSSQL_DEFAULT_RE | = | /\A(?:\(N?('.*')\)|\(\((-?\d+(?:\.\d+)?)\)\))\z/ |
MYSQL_TIMESTAMP_RE | = | /\ACURRENT_(?:DATE|TIMESTAMP)?\z/ |
STRING_DEFAULT_RE | = | /\A'(.*)'\z/ |
cache_schema | [RW] | Whether the schema should be cached for this database. True by default for performance; can be set to false to always issue a database query to get the schema. |
prepared_statements | [R] | The prepared statement object hash for this database, keyed by name symbol |
transaction_isolation_level | [RW] | The default transaction isolation level for this database, used for all future transactions. For MSSQL, this should be set to something if you ever plan to use the :isolation option to Database#transaction, as on MSSQL it affects all future transactions on the same connection. |
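For example, these attributes can be set directly on the Database instance (a sketch; values are illustrative):

DB.cache_schema = false                      # always query the database for schema information
DB.transaction_isolation_level = :committed  # default isolation level for future transactions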
Call the prepared statement with the given name with the given hash of arguments.
DB[:items].filter(:id=>1).prepare(:first, :sa)
DB.call(:sa) # SELECT * FROM items WHERE id = 1
# File lib/sequel/database/query.rb, line 58 58: def call(ps_name, hash={}, &block) 59: prepared_statement(ps_name).call(hash, &block) 60: end
Executes the given SQL on the database. This method should be overridden in descendants. This method should not be called directly by user code.
# File lib/sequel/database/query.rb, line 64 64: def execute(sql, opts={}) 65: raise NotImplemented, "#execute should be overridden by adapters" 66: end
Method that should be used when submitting any DDL (Data Definition Language) SQL, such as create_table. By default, calls execute_dui. This method should not be called directly by user code.
# File lib/sequel/database/query.rb, line 71 71: def execute_ddl(sql, opts={}, &block) 72: execute_dui(sql, opts, &block) 73: end
Method that should be used when issuing an INSERT statement. By default, calls execute_dui. This method should not be called directly by user code.
# File lib/sequel/database/query.rb, line 85 85: def execute_insert(sql, opts={}, &block) 86: execute_dui(sql, opts, &block) 87: end
Returns an array of hashes containing foreign key information from the table (a sketch of the return value appears after the option lists below). Each hash will contain at least the following fields:
:columns : | An array of columns in the given table |
:table : | The table referenced by the columns |
:key : | An array of columns referenced (in the table specified by :table), but can be nil on certain adapters if the primary key is referenced. |
The hash may also contain entries for:
:deferrable : | Whether the constraint is deferrable |
:name : | The name of the constraint |
:on_delete : | The action to take ON DELETE |
:on_update : | The action to take ON UPDATE |
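A rough sketch of a call and its return value (the albums/artists tables and the exact entries are illustrative; adapters differ in which optional fields they include):

DB.foreign_key_list(:albums)
# => [{:columns=>[:artist_id], :table=>:artists, :key=>[:id],
#      :name=>:albums_artist_id_fkey, :on_delete=>:no_action, :on_update=>:no_action}]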
# File lib/sequel/database/query.rb, line 103 103: def foreign_key_list(table, opts={}) 104: raise NotImplemented, "#foreign_key_list should be overridden by adapters" 105: end
Return a hash containing index information for the table. Hash keys are index name symbols. Values are subhashes with two keys, :columns and :unique. The value of :columns is an array of symbols of column names. The value of :unique is true or false depending on if the index is unique.
Should not include the primary key index, functional indexes, or partial indexes.
DB.indexes(:artists) # => {:artists_name_ukey=>{:columns=>[:name], :unique=>true}}
# File lib/sequel/database/query.rb, line 125 125: def indexes(table, opts={}) 126: raise NotImplemented, "#indexes should be overridden by adapters" 127: end
Returns the schema for the given table as an array with all members being arrays of length 2, the first member being the column name, and the second member being a hash of column information. The table argument can also be a dataset, as long as it only has one table. Available options are:
:reload : | Ignore any cached results, and get fresh information from the database. |
:schema : | An explicit schema to use. It may also be implicitly provided via the table name. |
If schema parsing is supported by the database, the column information hash should contain at least the following entries:
:allow_null : | Whether NULL is an allowed value for the column. |
:db_type : | The database type for the column, as a database specific string. |
:default : | The database default for the column, as a database specific string. |
:primary_key : | Whether the column is a primary key column. If this entry is not present, it means that primary key information is unavailable, not that the column is not a primary key. |
:ruby_default : | The database default for the column, as a ruby object. In many cases, complex database defaults cannot be parsed into ruby objects, in which case nil will be used as the value. |
:type : | A symbol specifying the type, such as :integer or :string. |
Example:
DB.schema(:artists)
# [[:id,
#   {:type=>:integer,
#    :primary_key=>true,
#    :default=>"nextval('artist_id_seq'::regclass)",
#    :ruby_default=>nil,
#    :db_type=>"integer",
#    :allow_null=>false}],
#  [:name,
#   {:type=>:string,
#    :primary_key=>false,
#    :default=>nil,
#    :ruby_default=>nil,
#    :db_type=>"text",
#    :allow_null=>false}]]
# File lib/sequel/database/query.rb, line 179 179: def schema(table, opts={}) 180: raise(Error, 'schema parsing is not implemented on this database') unless respond_to?(:schema_parse_table, true) 181: 182: opts = opts.dup 183: if table.is_a?(Dataset) 184: o = table.opts 185: from = o[:from] 186: raise(Error, "can only parse the schema for a dataset with a single from table") unless from && from.length == 1 && !o.include?(:join) && !o.include?(:sql) 187: tab = table.first_source_table 188: sch, table_name = schema_and_table(tab) 189: quoted_name = table.literal(tab) 190: opts[:dataset] = table 191: else 192: sch, table_name = schema_and_table(table) 193: quoted_name = quote_schema_table(table) 194: end 195: opts[:schema] = sch if sch && !opts.include?(:schema) 196: 197: Sequel.synchronize{@schemas.delete(quoted_name)} if opts[:reload] 198: return Sequel.synchronize{@schemas[quoted_name]} if @schemas[quoted_name] 199: 200: cols = schema_parse_table(table_name, opts) 201: raise(Error, 'schema parsing returned no columns, table probably doesn\'t exist') if cols.nil? || cols.empty? 202: cols.each{|_,c| c[:ruby_default] = column_schema_to_ruby_default(c[:default], c[:type])} 203: Sequel.synchronize{@schemas[quoted_name] = cols} if cache_schema 204: cols 205: end
Returns true if a table with the given name exists. This requires a query to the database.
DB.table_exists?(:foo) # => false
# SELECT NULL FROM foo LIMIT 1
Note that since this does a SELECT from the table, it can give false negatives if you don't have permission to SELECT from the table.
# File lib/sequel/database/query.rb, line 215 215: def table_exists?(name) 216: sch, table_name = schema_and_table(name) 217: name = SQL::QualifiedIdentifier.new(sch, table_name) if sch 218: _table_exists?(from(name)) 219: true 220: rescue DatabaseError 221: false 222: end
Starts a database transaction. When a database transaction is used, either all of its statements succeed or none of them do. Note that MySQL MyISAM tables do not support transactions. A usage example follows the option lists below.
The following general options are respected:
:isolation : | The transaction isolation level to use for this transaction, should be :uncommitted, :committed, :repeatable, or :serializable, used if given and the database/adapter supports customizable transaction isolation levels. |
:prepare : | A string to use as the transaction identifier for a prepared transaction (two-phase commit), if the database/adapter supports prepared transactions. |
:rollback : | Can be set to :reraise to reraise any Sequel::Rollback exceptions raised, or :always to always rollback even if no exceptions occur (useful for testing). |
:server : | The server to use for the transaction. |
:savepoint : | Whether to create a new savepoint for this transaction, only respected if the database/adapter supports savepoints. By default Sequel will reuse an existing transaction, so if you want to use a savepoint you must use this option. |
PostgreSQL specific options:
:deferrable : | (9.1+) If present, set to DEFERRABLE if true or NOT DEFERRABLE if false. |
:read_only : | If present, set to READ ONLY if true or READ WRITE if false. |
:synchronous : | If non-nil, set synchronous_commit appropriately. Valid values are true, :on, false, :off, :local (9.1+), and :remote_write (9.2+). |
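A minimal usage sketch (the items table and its column are illustrative):

DB.transaction do # BEGIN
  DB[:items].insert(:name=>'abc')
end # COMMIT

DB.transaction(:isolation=>:serializable) do
  raise Sequel::Rollback # rolls the transaction back without reraising
end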
# File lib/sequel/database/query.rb, line 260 260: def transaction(opts={}, &block) 261: synchronize(opts[:server]) do |conn| 262: return yield(conn) if already_in_transaction?(conn, opts) 263: _transaction(conn, opts, &block) 264: end 265: end
These methods relate to the logging of executed SQL. An example of configuring logging follows the attribute list below.
log_warn_duration | [RW] | Numeric specifying the duration beyond which queries are logged at warn level instead of info level. |
loggers | [RW] | Array of SQL loggers to use for this database. |
sql_log_level | [RW] | Log level at which to log SQL queries. This is actually the method sent to the logger, so it should be the method name symbol. The default is :info; it can be set to :debug to log at DEBUG level. |
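For example, a sketch of wiring up logging with the standard library Logger (the 0.5 second threshold is arbitrary):

require 'logger'
DB.loggers << Logger.new($stdout)
DB.sql_log_level = :debug
DB.log_warn_duration = 0.5 # queries taking longer than 0.5s are logged at warn level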
Log a message at error level, with information about the exception.
# File lib/sequel/database/logging.rb, line 21 21: def log_exception(exception, message) 22: log_each(:error, "#{exception.class}: #{exception.message.strip}: #{message}") 23: end
Log a message at level info to all loggers.
# File lib/sequel/database/logging.rb, line 26 26: def log_info(message, args=nil) 27: log_each(:info, args ? "#{message}; #{args.inspect}" : message) 28: end
Yield to the block, logging any errors at error level to all loggers, and all other queries with the duration at warn or info level.
# File lib/sequel/database/logging.rb, line 32 32: def log_yield(sql, args=nil) 33: return yield if @loggers.empty? 34: sql = "#{sql}; #{args.inspect}" if args 35: start = Time.now 36: begin 37: yield 38: rescue => e 39: log_exception(e, sql) 40: raise 41: ensure 42: log_duration(Time.now - start, sql) unless e 43: end 44: end
These methods all return instances of this database's dataset class.
Returns a dataset for the database. If the first argument is a string, the method acts as an alias for Database#fetch, returning a dataset for arbitrary SQL, with or without placeholders:
DB['SELECT * FROM items'].all
DB['SELECT * FROM items WHERE name = ?', my_name].all
Otherwise, acts as an alias for Database#from, setting the primary table for the dataset:
DB[:items].sql #=> "SELECT * FROM items"
# File lib/sequel/database/dataset.rb, line 19 19: def [](*args) 20: (String === args.first) ? fetch(*args) : from(*args) 21: end
Fetches records for an arbitrary SQL statement. If a block is given, it is used to iterate over the records:
DB.fetch('SELECT * FROM items'){|r| p r}
The fetch method returns a dataset instance:
DB.fetch('SELECT * FROM items').all
fetch can also perform parameterized queries for protection against SQL injection:
DB.fetch('SELECT * FROM items WHERE name = ?', my_name).all
# File lib/sequel/database/dataset.rb, line 44 44: def fetch(sql, *args, &block) 45: ds = dataset.with_sql(sql, *args) 46: ds.each(&block) if block 47: ds 48: end
Returns a new dataset with the from method invoked. If a block is given, it is used as a filter on the dataset.
DB.from(:items)         # SELECT * FROM items
DB.from(:items){id > 2} # SELECT * FROM items WHERE (id > 2)
# File lib/sequel/database/dataset.rb, line 55 55: def from(*args, &block) 56: ds = dataset.from(*args) 57: block ? ds.filter(&block) : ds 58: end
Returns a new dataset with the select method invoked.
DB.select(1)                # SELECT 1
DB.select{server_version{}} # SELECT server_version()
DB.select(:id).from(:items) # SELECT id FROM items
# File lib/sequel/database/dataset.rb, line 65 65: def select(*args, &block) 66: dataset.select(*args, &block) 67: end
AUTOINCREMENT | = | 'AUTOINCREMENT'.freeze | ||
COMMA_SEPARATOR | = | ', '.freeze | ||
NOT_NULL | = | ' NOT NULL'.freeze | ||
NULL | = | ' NULL'.freeze | ||
PRIMARY_KEY | = | ' PRIMARY KEY'.freeze | ||
TEMPORARY | = | 'TEMPORARY '.freeze | ||
UNDERSCORE | = | '_'.freeze | ||
UNIQUE | = | ' UNIQUE'.freeze | ||
UNSIGNED | = | ' UNSIGNED'.freeze | ||
COLUMN_DEFINITION_ORDER | = | [:collate, :default, :null, :unique, :primary_key, :auto_increment, :references] | The order of column modifiers to use when defining a column. | |
DEFAULT_JOIN_TABLE_COLUMN_OPTIONS | = | {:null=>false} | The default options for join table columns. |
Adds a column to the specified table. This method expects a column name, a datatype and optionally a hash with additional constraints and options:
DB.add_column :items, :name, :text, :unique => true, :null => false
DB.add_column :items, :category, :text, :default => 'ruby'
See alter_table.
# File lib/sequel/database/schema_methods.rb, line 31 31: def add_column(table, *args) 32: alter_table(table) {add_column(*args)} 33: end
Adds an index to a table for the given columns:
DB.add_index :posts, :title
DB.add_index :posts, [:author, :title], :unique => true
Options:
:ignore_errors : | Ignore any DatabaseErrors that are raised |
See alter_table.
# File lib/sequel/database/schema_methods.rb, line 44 44: def add_index(table, columns, options={}) 45: e = options[:ignore_errors] 46: begin 47: alter_table(table){add_index(columns, options)} 48: rescue DatabaseError 49: raise unless e 50: end 51: end
Alters the given table with the specified block. Example:
DB.alter_table :items do
  add_column :category, :text, :default => 'ruby'
  drop_column :category
  rename_column :cntr, :counter
  set_column_type :value, :float
  set_column_default :value, :float
  add_index [:group, :category]
  drop_index [:group, :category]
end
Note that add_column accepts all the options available for column definitions using create_table, and add_index accepts all the options available for index definition.
See Schema::AlterTableGenerator and the "Migrations and Schema Modification" guide.
# File lib/sequel/database/schema_methods.rb, line 70 70: def alter_table(name, generator=nil, &block) 71: generator ||= alter_table_generator(&block) 72: remove_cached_schema(name) 73: apply_alter_table(name, generator.operations) 74: nil 75: end
Return a new Schema::AlterTableGenerator instance with the receiver as the database and the given block.
# File lib/sequel/database/schema_methods.rb, line 79 79: def alter_table_generator(&block) 80: alter_table_generator_class.new(self, &block) 81: end
Create a join table using a hash of foreign keys to referenced table names. Example:
create_join_table(:cat_id=>:cats, :dog_id=>:dogs)
# CREATE TABLE cats_dogs (
#   cat_id integer NOT NULL REFERENCES cats,
#   dog_id integer NOT NULL REFERENCES dogs,
#   PRIMARY KEY (cat_id, dog_id)
# )
# CREATE INDEX cats_dogs_dog_id_cat_id_index ON cats_dogs(dog_id, cat_id)
The primary key and index are used so that almost all operations on the table can benefit from one of the two indexes, and the primary key ensures that entries in the table are unique, which is the typical desire for a join table.
You can provide column options by making the values in the hash be option hashes, so long as the option hashes have a :table entry giving the table referenced:
create_join_table(:cat_id=>{:table=>:cats, :type=>Bignum}, :dog_id=>:dogs)
You can provide a second argument which is a table options hash:
create_join_table({:cat_id=>:cats, :dog_id=>:dogs}, :temp=>true)
Some table options are handled specially:
:index_options : | The options to pass to the index |
:name : | The name of the table to create |
:no_index : | Set to true to not create the second index. |
:no_primary_key : | Set to true to not create the primary key. |
# File lib/sequel/database/schema_methods.rb, line 115 115: def create_join_table(hash, options={}) 116: keys = hash.keys.sort_by{|k| k.to_s} 117: create_table(join_table_name(hash, options), options) do 118: keys.each do |key| 119: v = hash[key] 120: unless v.is_a?(Hash) 121: v = {:table=>v} 122: end 123: v = DEFAULT_JOIN_TABLE_COLUMN_OPTIONS.merge(v) 124: foreign_key(key, v) 125: end 126: primary_key(keys) unless options[:no_primary_key] 127: index(keys.reverse, options[:index_options] || {}) unless options[:no_index] 128: end 129: end
Creates a view, replacing it if it already exists:
DB.create_or_replace_view(:cheap_items, "SELECT * FROM items WHERE price < 100")
DB.create_or_replace_view(:ruby_items, DB[:items].filter(:category => 'ruby'))
# File lib/sequel/database/schema_methods.rb, line 201 201: def create_or_replace_view(name, source) 202: source = source.sql if source.is_a?(Dataset) 203: execute_ddl("CREATE OR REPLACE VIEW #{quote_schema_table(name)} AS #{source}") 204: remove_cached_schema(name) 205: nil 206: end
Creates a table with the columns given in the provided block:
DB.create_table :posts do
  primary_key :id
  column :title, :text
  String :content
  index :title
end
General options:
:as : | Create the table using the value, which should be either a dataset or a literal SQL string. If this option is used, a block should not be given to the method. |
:ignore_index_errors : | Ignore any errors when creating indexes. |
:temp : | Create the table as a temporary table. |
MySQL specific options:
:charset : | The character set to use for the table. |
:collate : | The collation to use for the table. |
:engine : | The table engine to use for the table. |
See Schema::Generator and the "Schema Modification" guide.
# File lib/sequel/database/schema_methods.rb, line 153 153: def create_table(name, options={}, &block) 154: remove_cached_schema(name) 155: options = {:generator=>options} if options.is_a?(Schema::CreateTableGenerator) 156: if sql = options[:as] 157: raise(Error, "can't provide both :as option and block to create_table") if block 158: create_table_as(name, sql, options) 159: else 160: generator = options[:generator] || create_table_generator(&block) 161: create_table_from_generator(name, generator, options) 162: create_table_indexes_from_generator(name, generator, options) 163: nil 164: end 165: end
Forcibly create a table, attempting to drop it if it already exists, then creating it.
DB.create_table!(:a){Integer :a}
# SELECT NULL FROM a LIMIT 1 -- check existence
# DROP TABLE a -- drop table if already exists
# CREATE TABLE a (a integer)
# File lib/sequel/database/schema_methods.rb, line 173 173: def create_table!(name, options={}, &block) 174: drop_table?(name) 175: create_table(name, options, &block) 176: end
Creates the table unless the table already exists.
DB.create_table?(:a){Integer :a}
# SELECT NULL FROM a LIMIT 1 -- check existence
# CREATE TABLE a (a integer) -- if it doesn't already exist
# File lib/sequel/database/schema_methods.rb, line 183 183: def create_table?(name, options={}, &block) 184: if supports_create_table_if_not_exists? 185: create_table(name, options.merge(:if_not_exists=>true), &block) 186: elsif !table_exists?(name) 187: create_table(name, options, &block) 188: end 189: end
Return a new Schema::CreateTableGenerator instance with the receiver as the database and the given block.
# File lib/sequel/database/schema_methods.rb, line 193 193: def create_table_generator(&block) 194: create_table_generator_class.new(self, &block) 195: end
Creates a view based on a dataset or an SQL string:
DB.create_view(:cheap_items, "SELECT * FROM items WHERE price < 100")
DB.create_view(:ruby_items, DB[:items].filter(:category => 'ruby'))
# File lib/sequel/database/schema_methods.rb, line 212 212: def create_view(name, source) 213: source = source.sql if source.is_a?(Dataset) 214: execute_ddl("CREATE VIEW #{quote_schema_table(name)} AS #{source}") 215: end
Removes a column from the specified table:
DB.drop_column :items, :category
See alter_table.
# File lib/sequel/database/schema_methods.rb, line 222 222: def drop_column(table, *args) 223: alter_table(table) {drop_column(*args)} 224: end
Removes an index for the given table and column/s:
DB.drop_index :posts, :title
DB.drop_index :posts, [:author, :title]
See alter_table.
# File lib/sequel/database/schema_methods.rb, line 232 232: def drop_index(table, columns, options={}) 233: alter_table(table){drop_index(columns, options)} 234: end
Drop the join table that would have been created with the same arguments to create_join_table:
drop_join_table(:cat_id=>:cats, :dog_id=>:dogs) # DROP TABLE cats_dogs
# File lib/sequel/database/schema_methods.rb, line 241 241: def drop_join_table(hash, options={}) 242: drop_table(join_table_name(hash, options), options) 243: end
Drops one or more tables corresponding to the given names:
DB.drop_table(:posts) # DROP TABLE posts
DB.drop_table(:posts, :comments)
DB.drop_table(:posts, :comments, :cascade=>true)
# File lib/sequel/database/schema_methods.rb, line 250 250: def drop_table(*names) 251: options = names.last.is_a?(Hash) ? names.pop : {} 252: names.each do |n| 253: execute_ddl(drop_table_sql(n, options)) 254: remove_cached_schema(n) 255: end 256: nil 257: end
Drops the table if it already exists. If it doesn't exist, does nothing.
DB.drop_table?(:a)
# SELECT NULL FROM a LIMIT 1 -- check existence
# DROP TABLE a -- if it already exists
# File lib/sequel/database/schema_methods.rb, line 265 265: def drop_table?(*names) 266: options = names.last.is_a?(Hash) ? names.pop : {} 267: if supports_drop_table_if_exists? 268: options = options.merge(:if_exists=>true) 269: names.each do |name| 270: drop_table(name, options) 271: end 272: else 273: names.each do |name| 274: drop_table(name, options) if table_exists?(name) 275: end 276: end 277: end
Drops one or more views corresponding to the given names:
DB.drop_view(:cheap_items)
DB.drop_view(:cheap_items, :pricey_items)
DB.drop_view(:cheap_items, :pricey_items, :cascade=>true)
# File lib/sequel/database/schema_methods.rb, line 284 284: def drop_view(*names) 285: options = names.last.is_a?(Hash) ? names.pop : {} 286: names.each do |n| 287: execute_ddl(drop_view_sql(n, options)) 288: remove_cached_schema(n) 289: end 290: nil 291: end
Renames a column in the specified table. This method expects the current column name and the new column name:
DB.rename_column :items, :cntr, :counter
See alter_table.
# File lib/sequel/database/schema_methods.rb, line 310 310: def rename_column(table, *args) 311: alter_table(table) {rename_column(*args)} 312: end
Renames a table:
DB.tables #=> [:items]
DB.rename_table :items, :old_items
DB.tables #=> [:old_items]
# File lib/sequel/database/schema_methods.rb, line 298 298: def rename_table(name, new_name) 299: execute_ddl(rename_table_sql(name, new_name)) 300: remove_cached_schema(name) 301: nil 302: end
Sets the default value for the given column in the given table:
DB.set_column_default :items, :category, 'perl!'
See alter_table.
# File lib/sequel/database/schema_methods.rb, line 319 319: def set_column_default(table, *args) 320: alter_table(table) {set_column_default(*args)} 321: end
Set the data type for the given column in the given table:
DB.set_column_type :items, :price, :float
See alter_table.
# File lib/sequel/database/schema_methods.rb, line 328 328: def set_column_type(table, *args) 329: alter_table(table) {set_column_type(*args)} 330: end
These methods involve the Database's connection pool.
ADAPTERS | = | %w'ado amalgalite db2 dbi do firebird ibmdb informix jdbc mock mysql mysql2 odbc openbase oracle postgres sqlite swift tinytds'.collect{|x| x.to_sym} | Array of supported database adapters |
The Database subclass for the given adapter scheme. Raises Sequel::AdapterNotFound if the adapter could not be loaded.
# File lib/sequel/database/connecting.rb, line 17 17: def self.adapter_class(scheme) 18: return scheme if scheme.is_a?(Class) 19: 20: scheme = scheme.to_s.gsub('-', '_').to_sym 21: 22: unless klass = ADAPTER_MAP[scheme] 23: # attempt to load the adapter file 24: begin 25: Sequel.tsk_require "sequel/adapters/#{scheme}" 26: rescue LoadError => e 27: raise Sequel.convert_exception_class(e, AdapterNotFound) 28: end 29: 30: # make sure we actually loaded the adapter 31: unless klass = ADAPTER_MAP[scheme] 32: raise AdapterNotFound, "Could not load #{scheme} adapter: adapter class not registered in ADAPTER_MAP" 33: end 34: end 35: klass 36: end
Connects to a database. See Sequel.connect.
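Typical usage, via a connection string or an options hash (the connection details shown are placeholders):

DB = Sequel.connect('postgres://user:password@localhost/blog')
DB = Sequel.connect(:adapter=>'postgres', :host=>'localhost', :database=>'blog',
                    :user=>'user', :password=>'password')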
# File lib/sequel/database/connecting.rb, line 44 44: def self.connect(conn_string, opts = {}) 45: case conn_string 46: when String 47: if match = /\A(jdbc|do):/o.match(conn_string) 48: c = adapter_class(match[1].to_sym) 49: opts = opts.merge(:orig_opts=>opts.dup) 50: opts = {:uri=>conn_string}.merge(opts) 51: else 52: uri = URI.parse(conn_string) 53: scheme = uri.scheme 54: scheme = :dbi if scheme =~ /\Adbi-/ 55: c = adapter_class(scheme) 56: uri_options = c.send(:uri_to_options, uri) 57: uri.query.split('&').collect{|s| s.split('=')}.each{|k,v| uri_options[k.to_sym] = v if k && !k.empty?} unless uri.query.to_s.strip.empty? 58: uri_options.to_a.each{|k,v| uri_options[k] = URI.unescape(v) if v.is_a?(String)} 59: opts = opts.merge(:orig_opts=>opts.dup) 60: opts[:uri] = conn_string 61: opts = uri_options.merge(opts) 62: opts[:adapter] = scheme 63: end 64: when Hash 65: opts = conn_string.merge(opts) 66: opts = opts.merge(:orig_opts=>opts.dup) 67: c = adapter_class(opts[:adapter_class] || opts[:adapter] || opts['adapter']) 68: else 69: raise Error, "Sequel::Database.connect takes either a Hash or a String, given: #{conn_string.inspect}" 70: end 71: # process opts a bit 72: opts = opts.inject({}) do |m, (k,v)| 73: k = :user if k.to_s == 'username' 74: m[k.to_sym] = v 75: m 76: end 77: begin 78: db = c.new(opts) 79: db.test_connection if opts[:test] && db.send(:typecast_value_boolean, opts[:test]) 80: result = yield(db) if block_given? 81: ensure 82: if block_given? 83: db.disconnect if db 84: Sequel.synchronize{::Sequel::DATABASES.delete(db)} 85: end 86: end 87: block_given? ? result : db 88: end
Returns the scheme symbol for this instance's class, which reflects which adapter is being used. In some cases, this can be the same as the database_type (for native adapters); in others (e.g. adapters with subadapters), it will be different.
Sequel.connect('jdbc:postgres://...').adapter_scheme # => :jdbc
# File lib/sequel/database/connecting.rb, line 124 124: def adapter_scheme 125: self.class.adapter_scheme 126: end
Dynamically add new servers or modify server options at runtime. Also adds new servers to the connection pool. Intended for use with master/slave or shard configurations where it is useful to add new server hosts at runtime.
The servers argument should be a hash with server name symbol keys and hash or proc values. If a server name is already in use, its value is overridden with the value provided.
DB.add_servers(:f=>{:host=>"hash_host_f"})
# File lib/sequel/database/connecting.rb, line 137 137: def add_servers(servers) 138: if h = @opts[:servers] 139: Sequel.synchronize{h.merge!(servers)} 140: @pool.add_servers(servers.keys) 141: end 142: end
Connects to the database. This method should be overridden by descendants.
# File lib/sequel/database/connecting.rb, line 145 145: def connect(server) 146: raise NotImplemented, "#connect should be overridden by adapters" 147: end
The database type for this database object, the same as the adapter scheme by default. Should be overridden in adapters (especially shared adapters) to be the correct type, so that even if two separate Database objects are using different adapters you can tell that they are using the same database type. Even better, you can tell that two Database objects that are using the same adapter are connecting to different database types (think JDBC or DataObjects).
Sequel.connect('jdbc:postgres://...').database_type # => :postgres
# File lib/sequel/database/connecting.rb, line 158 158: def database_type 159: adapter_scheme 160: end
Disconnects all available connections from the connection pool. Any connections currently in use will not be disconnected. Options:
:servers : | Should be a symbol specifying the server to disconnect from, or an array of symbols to specify multiple servers. |
Example:
DB.disconnect # All servers
DB.disconnect(:servers=>:server1) # Single server
DB.disconnect(:servers=>[:server1, :server2]) # Multiple servers
# File lib/sequel/database/connecting.rb, line 172 172: def disconnect(opts = {}) 173: pool.disconnect(opts) 174: end
Yield a new Database instance for every server in the connection pool. Intended for use in sharded environments where there is a need to make schema modifications (DDL queries) on each shard.
DB.each_server{|db| db.create_table(:users){primary_key :id; String :name}}
# File lib/sequel/database/connecting.rb, line 181 181: def each_server(&block) 182: servers.each{|s| self.class.connect(server_opts(s), &block)} 183: end
Dynamically remove existing servers from the connection pool. Intended for use with master/slave or shard configurations where it is useful to remove existing server hosts at runtime.
servers should be symbols or arrays of symbols. If a nonexistent server is specified, it is ignored. If no servers have been specified for this database, no changes are made. If you attempt to remove the :default server, an error will be raised.
DB.remove_servers(:f1, :f2)
# File lib/sequel/database/connecting.rb, line 195 195: def remove_servers(*servers) 196: if h = @opts[:servers] 197: servers.flatten.each{|s| Sequel.synchronize{h.delete(s)}} 198: @pool.remove_servers(servers) 199: end 200: end
Returns true if the database is using a single-threaded connection pool.
# File lib/sequel/database/connecting.rb, line 211 211: def single_threaded? 212: @single_threaded 213: end
Acquires a database connection, yielding it to the passed block. This is useful if you want to make sure the same connection is used for all database queries in the block. It is also useful if you want to gain direct access to the underlying connection object if you need to do something Sequel does not natively support.
If a server option is given, acquires a connection for that specific server, instead of the :default server.
DB.synchronize do |conn|
  ...
end
# File lib/sequel/database/connecting.rb, line 228 228: def synchronize(server=nil) 229: @pool.hold(server || :default){|conn| yield conn} 230: end
# File lib/sequel/database/connecting.rb, line 232 232: def synchronize(server=nil, &block) 233: @pool.hold(server || :default, &block) 234: end
Attempts to acquire a database connection. Returns true if successful. Will probably raise an Error if unsuccessful. If a server argument is given, attempts to acquire a database connection to the given server/shard.
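For example (:server1 is a hypothetical shard name):

DB.test_connection           # => true, or raises if no connection can be acquired
DB.test_connection(:server1) # check a specific shard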
# File lib/sequel/database/connecting.rb, line 241 241: def test_connection(server=nil) 242: synchronize(server){|conn|} 243: true 244: end
These methods change the default behavior of this database's datasets.
DatasetClass | = | Sequel::Dataset | The default class to use for datasets |
dataset_class | [R] | The class to use for creating datasets. Should respond to new with the Database argument as the first argument, and an optional options hash. |
default_schema | [RW] | The default schema to use, generally should be nil. This sets the default schema used for some schema modification and introspection queries, but does not affect most dataset code. |
If the database has any dataset modules associated with it, use a subclass of the given class that includes the modules as the dataset class.
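A sketch of installing a custom dataset subclass (MyDataset and its method are hypothetical):

class MyDataset < Sequel::Dataset
  # Return a dataset limited to the given 10-row page
  def tenth_page(page)
    limit(10, (page - 1) * 10)
  end
end
DB.dataset_class = MyDataset
DB[:items].tenth_page(2) # datasets created after the assignment use MyDataset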
# File lib/sequel/database/dataset_defaults.rb, line 61 61: def dataset_class=(c) 62: unless @dataset_modules.empty? 63: c = Class.new(c) 64: @dataset_modules.each{|m| c.send(:include, m)} 65: end 66: @dataset_class = c 67: end
Equivalent to extending all datasets produced by the database with a module. What it actually does is use a subclass of the current dataset_class as the new dataset_class, and include the module in the subclass. Instead of a module, you can provide a block that is used to create an anonymous module.
This allows you to override any of the dataset methods even if they are defined directly on the dataset class that this Database object uses.
Examples:
# Introspect columns for all of DB's datasets
DB.extend_datasets(Sequel::ColumnsIntrospection)

# Trace all SELECT queries by printing the SQL and the full backtrace
DB.extend_datasets do
  def fetch_rows(sql)
    puts sql
    puts caller
    super
  end
end
# File lib/sequel/database/dataset_defaults.rb, line 91 91: def extend_datasets(mod=nil, &block) 92: raise(Error, "must provide either mod or block, not both") if mod && block 93: reset_schema_utility_dataset 94: mod = Module.new(&block) if block 95: if @dataset_modules.empty? 96: @dataset_modules = [mod] 97: @dataset_class = Class.new(@dataset_class) 98: else 99: @dataset_modules << mod 100: end 101: @dataset_class.send(:include, mod) 102: end
The method to call on identifiers going into the database
# File lib/sequel/database/dataset_defaults.rb, line 105 105: def identifier_input_method 106: case @identifier_input_method 107: when nil 108: @identifier_input_method = @opts.fetch(:identifier_input_method, (@@identifier_input_method.nil? ? identifier_input_method_default : @@identifier_input_method)) 109: @identifier_input_method == "" ? nil : @identifier_input_method 110: when "" 111: nil 112: else 113: @identifier_input_method 114: end 115: end
Set the method to call on identifiers going into the database:
DB[:items] # SELECT * FROM items
DB.identifier_input_method = :upcase
DB[:items] # SELECT * FROM ITEMS
# File lib/sequel/database/dataset_defaults.rb, line 122 122: def identifier_input_method=(v) 123: reset_schema_utility_dataset 124: @identifier_input_method = v || "" 125: end
The method to call on identifiers coming from the database
# File lib/sequel/database/dataset_defaults.rb, line 128 128: def identifier_output_method 129: case @identifier_output_method 130: when nil 131: @identifier_output_method = @opts.fetch(:identifier_output_method, (@@identifier_output_method.nil? ? identifier_output_method_default : @@identifier_output_method)) 132: @identifier_output_method == "" ? nil : @identifier_output_method 133: when "" 134: nil 135: else 136: @identifier_output_method 137: end 138: end
Set the method to call on identifiers coming from the database:
DB[:items].first # {:id=>1, :name=>'foo'}
DB.identifier_output_method = :upcase
DB[:items].first # {:ID=>1, :NAME=>'foo'}
# File lib/sequel/database/dataset_defaults.rb, line 145 145: def identifier_output_method=(v) 146: reset_schema_utility_dataset 147: @identifier_output_method = v || "" 148: end
Set whether to quote identifiers (columns and tables) for this database:
DB[:items] # SELECT * FROM items
DB.quote_identifiers = true
DB[:items] # SELECT * FROM "items"
# File lib/sequel/database/dataset_defaults.rb, line 155 155: def quote_identifiers=(v) 156: reset_schema_utility_dataset 157: @quote_identifiers = v 158: end
Returns true if the database quotes identifiers.
# File lib/sequel/database/dataset_defaults.rb, line 161 161: def quote_identifiers? 162: return @quote_identifiers unless @quote_identifiers.nil? 163: @quote_identifiers = @opts.fetch(:quote_identifiers, (@@quote_identifiers.nil? ? quote_identifiers_default : @@quote_identifiers)) 164: end
These methods don't fit neatly into another category.
EXTENSIONS | = | {} | Hash of extension name symbols to callable objects to load the extension into the Database object (usually by extending it with a module defined in the extension). | |
LEADING_ZERO_RE | = | /\A0+(\d)/.freeze | Used for checking/removing leading zeroes from strings so they don't get interpreted as octal. | |
LEADING_ZERO_REP | = | "\\1".freeze | Replacement string when replacing leading zeroes. | |
DISCONNECT_ERROR_RE | = | /terminating connection due to administrator command/ | ||
MYSQL_DATABASE_DISCONNECT_ERRORS | = | /\A(Commands out of sync; you can't run this command now|Can't connect to local MySQL server through socket|MySQL server has gone away|Lost connection to MySQL server during query)/ | Mysql::Error messages that indicate the current connection should be disconnected | |
AFFECTED_ROWS_RE | = | /Rows matched:\s+(\d+)\s+Changed:\s+\d+\s+Warnings:\s+\d+/.freeze | Regular expression used for getting accurate number of rows matched by an update statement. |
conversion_procs | [R] | Hash of conversion procs for the current database |
conversion_procs | [R] | The conversion procs to use for this database |
convert_invalid_date_time | [R] | By default, Sequel raises an exception if an invalid date or time is used. However, if this is set to nil or :nil, the adapter treats dates like 0000-00-00 and times like 838:00:00 as nil values. If set to :string, it returns the strings as is. |
convert_tinyint_to_bool | [R] | Whether to convert tinyint columns to bool for the current database |
convert_types | [RW] | Whether to convert some Java types to ruby types when retrieving rows. True by default, can be set to false to roughly double performance when fetching rows. |
database_type | [R] | The type of database we are connecting to |
driver | [R] | The Java database driver we are using |
opts | [R] | The options hash for this database |
swift_class | [RW] | The Swift adapter class being used by this database. Connections in this database's connection pool will be instances of this class. |
timezone | [W] | Set the timezone to use for this database, overriding Sequel.database_timezone. |
Call the DATABASE_SETUP proc directly after initialization, so the object always uses sub adapter specific code. Also, raise an error immediately if the connection doesn't have a uri, since JDBC requires one.
# File lib/sequel/adapters/jdbc.rb, line 158 158: def initialize(opts) 159: super 160: @connection_prepared_statements = {} 161: @connection_prepared_statements_mutex = Mutex.new 162: @convert_types = typecast_value_boolean(@opts.fetch(:convert_types, true)) 163: raise(Error, "No connection string specified") unless uri 164: 165: resolved_uri = jndi? ? get_uri_from_jndi : uri 166: 167: if match = /\Ajdbc:([^:]+)/.match(resolved_uri) and prok = DATABASE_SETUP[match[1].to_sym] 168: @driver = prok.call(self) 169: end 170: end
Constructs a new instance of a database connection with the specified options hash.
Accepts the following options:
:default_schema : | The default schema to use, see default_schema. |
:disconnection_proc : | A proc used to disconnect the connection |
:identifier_input_method : | A string method symbol to call on identifiers going into the database |
:identifier_output_method : | A string method symbol to call on identifiers coming from the database |
:logger : | A specific logger to use |
:loggers : | An array of loggers to use |
:quote_identifiers : | Whether to quote identifiers |
:servers : | A hash specifying server/shard-specific options, keyed by shard symbol |
:single_threaded : | Whether to use a single-threaded connection pool |
:sql_log_level : | Method to use to log SQL to a logger, :info by default. |
All options given are also passed to the connection pool. If a block is given, it is used as the connection_proc for the ConnectionPool.
# File lib/sequel/database/misc.rb, line 64 64: def initialize(opts = {}, &block) 65: @opts ||= opts 66: @opts = connection_pool_default_options.merge(@opts) 67: @loggers = Array(@opts[:logger]) + Array(@opts[:loggers]) 68: self.log_warn_duration = @opts[:log_warn_duration] 69: @opts[:disconnection_proc] ||= proc{|conn| disconnect_connection(conn)} 70: block ||= proc{|server| connect(server)} 71: @opts[:servers] = {} if @opts[:servers].is_a?(String) 72: @opts[:adapter_class] = self.class 73: 74: @opts[:single_threaded] = @single_threaded = typecast_value_boolean(@opts.fetch(:single_threaded, @@single_threaded)) 75: @schemas = {} 76: @default_schema = @opts.fetch(:default_schema, default_schema_default) 77: @prepared_statements = {} 78: @transactions = {} 79: @identifier_input_method = nil 80: @identifier_output_method = nil 81: @quote_identifiers = nil 82: @timezone = nil 83: @dataset_class = dataset_class_default 84: @cache_schema = typecast_value_boolean(@opts.fetch(:cache_schema, true)) 85: @dataset_modules = [] 86: self.sql_log_level = @opts[:sql_log_level] ? @opts[:sql_log_level].to_sym : :info 87: @pool = ConnectionPool.get_pool(@opts, &block) 88: 89: Sequel.synchronize{::Sequel::DATABASES.push(self)} 90: end
# File lib/sequel/adapters/mysql.rb, line 65 65: def initialize(opts={}) 66: super 67: @conversion_procs = MYSQL_TYPES.dup 68: self.convert_tinyint_to_bool = Sequel::MySQL.convert_tinyint_to_bool 69: self.convert_invalid_date_time = Sequel::MySQL.convert_invalid_date_time 70: end
# File lib/sequel/adapters/sqlite.rb, line 94 94: def initialize(opts={}) 95: super 96: @conversion_procs = SQLITE_TYPES.dup 97: @conversion_procs['datetime'] = @conversion_procs['timestamp'] = method(:to_application_timestamp) 98: set_integer_booleans 99: end
Call the DATABASE_SETUP proc directly after initialization, so the object always uses sub adapter specific code. Also, raise an error immediately if the connection doesn't have a db_type specified, since one is required to include the correct subadapter.
# File lib/sequel/adapters/swift.rb, line 45 45: def initialize(opts) 46: super 47: if db_type = opts[:db_type] and !db_type.to_s.empty? 48: if prok = DATABASE_SETUP[db_type.to_s.to_sym] 49: prok.call(self) 50: else 51: raise(Error, "No :db_type option specified") 52: end 53: else 54: raise(Error, ":db_type option not valid, should be postgres, mysql, or sqlite") 55: end 56: end
Call the DATABASE_SETUP proc directly after initialization, so the object always uses sub adapter specific code. Also, raise an error immediately if the connection doesn't have a uri, since DataObjects requires one.
# File lib/sequel/adapters/do.rb, line 51 51: def initialize(opts) 52: super 53: raise(Error, "No connection string specified") unless uri 54: if prok = DATABASE_SETUP[subadapter.to_sym] 55: prok.call(self) 56: end 57: end
Register an extension callback for Database objects. ext should be the extension name symbol, and mod should either be a Module that the database is extended with, or a callable object called with the database object. If mod is not provided, a block can be provided and is treated as the mod object.
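A sketch of registering a hypothetical extension named :query_helpers (the module and its method are illustrative):

module QueryHelpers
  # Return a dataset selecting the current database timestamp
  def select_now
    self['SELECT CURRENT_TIMESTAMP']
  end
end
Sequel::Database.register_extension(:query_helpers, QueryHelpers)
# equivalently, using the block form:
Sequel::Database.register_extension(:query_helpers){|db| db.extend(QueryHelpers)}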
# File lib/sequel/database/misc.rb, line 18 18: def self.register_extension(ext, mod=nil, &block) 19: if mod 20: raise(Error, "cannot provide both mod and block to Database.register_extension") if block 21: if mod.is_a?(Module) 22: block = proc{|db| db.extend(mod)} 23: else 24: block = mod 25: end 26: end 27: Sequel.synchronize{EXTENSIONS[ext] = block} 28: end
If a transaction is not currently in progress, yield to the block immediately. Otherwise, add the block to the list of blocks to call after the currently in progress transaction commits (and only if it commits). A usage example follows the options. Options:
:server : | The server/shard to use. |
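A usage sketch inside a transaction (the puts calls stand in for real hooks):

DB.transaction do
  DB.after_commit{puts 'committed'}     # runs only if the transaction commits
  DB.after_rollback{puts 'rolled back'} # runs only if the transaction rolls back
  DB[:items].insert(:name=>'abc')
end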
# File lib/sequel/database/misc.rb, line 97 97: def after_commit(opts={}, &block) 98: raise Error, "must provide block to after_commit" unless block 99: synchronize(opts[:server]) do |conn| 100: if h = _trans(conn) 101: raise Error, "cannot call after_commit in a prepared transaction" if h[:prepare] 102: (h[:after_commit] ||= []) << block 103: else 104: yield 105: end 106: end 107: end
If a transaction is not currently in progress, ignore the block. Otherwise, add the block to the list of the blocks to call after the currently in progress transaction rolls back (and only if it rolls back). Options:
:server : | The server/shard to use. |
# File lib/sequel/database/misc.rb, line 114 114: def after_rollback(opts={}, &block) 115: raise Error, "must provide block to after_rollback" unless block 116: synchronize(opts[:server]) do |conn| 117: if h = _trans(conn) 118: raise Error, "cannot call after_rollback in a prepared transaction" if h[:prepare] 119: (h[:after_rollback] ||= []) << block 120: end 121: end 122: end
Execute the stored procedure with the given name. If a block is given, the stored procedure should return rows.
# File lib/sequel/adapters/jdbc.rb, line 174 174: def call_sproc(name, opts = {}) 175: args = opts[:args] || [] 176: sql = "{call #{name}(#{args.map{'?'}.join(',')})}" 177: synchronize(opts[:server]) do |conn| 178: cps = conn.prepareCall(sql) 179: 180: i = 0 181: args.each{|arg| set_ps_arg(cps, arg, i+=1)} 182: 183: begin 184: if block_given? 185: yield log_yield(sql){cps.executeQuery} 186: else 187: case opts[:type] 188: when :insert 189: log_yield(sql){cps.executeUpdate} 190: last_insert_id(conn, opts) 191: else 192: log_yield(sql){cps.executeUpdate} 193: end 194: end 195: rescue NativeException, JavaSQL::SQLException => e 196: raise_error(e) 197: ensure 198: cps.close 199: end 200: end 201: end
Set up a DataObjects::Connection to the database.
# File lib/sequel/adapters/do.rb, line 60 60: def connect(server) 61: setup_connection(::DataObjects::Connection.new(uri(server_opts(server)))) 62: end
Connect to the database. Since SQLite is a file-based database, the only options available are :database (to specify the database name) and :timeout, to specify how long to wait for the database to be available if it is locked, given in milliseconds (default is 5000).
# File lib/sequel/adapters/sqlite.rb, line 105 105: def connect(server) 106: opts = server_opts(server) 107: opts[:database] = ':memory:' if blank_object?(opts[:database]) 108: db = ::SQLite3::Database.new(opts[:database]) 109: db.busy_timeout(opts.fetch(:timeout, 5000)) 110: 111: connection_pragmas.each{|s| log_yield(s){db.execute_batch(s)}} 112: 113: class << db 114: attr_reader :prepared_statements 115: end 116: db.instance_variable_set(:@prepared_statements, {}) 117: 118: db 119: end
Connect to the database using JavaSQL::DriverManager.getConnection.
# File lib/sequel/adapters/jdbc.rb, line 204 204: def connect(server) 205: opts = server_opts(server) 206: conn = if jndi? 207: get_connection_from_jndi 208: else 209: args = [uri(opts)] 210: args.concat([opts[:user], opts[:password]]) if opts[:user] && opts[:password] 211: begin 212: JavaSQL::DriverManager.setLoginTimeout(opts[:login_timeout]) if opts[:login_timeout] 213: JavaSQL::DriverManager.getConnection(*args) 214: rescue JavaSQL::SQLException, NativeException, StandardError => e 215: raise e unless driver 216: # If the DriverManager can't get the connection - use the connect 217: # method of the driver. (This happens under Tomcat for instance) 218: props = java.util.Properties.new 219: if opts && opts[:user] && opts[:password] 220: props.setProperty("user", opts[:user]) 221: props.setProperty("password", opts[:password]) 222: end 223: opts[:jdbc_properties].each{|k,v| props.setProperty(k.to_s, v)} if opts[:jdbc_properties] 224: begin 225: c = driver.new.connect(args[0], props) 226: raise(Sequel::DatabaseError, 'driver.new.connect returned nil: probably bad JDBC connection string') unless c 227: c 228: rescue JavaSQL::SQLException, NativeException, StandardError => e2 229: e.message << "\n#{e2.class.name}: #{e2.message}" 230: raise e 231: end 232: end 233: end 234: setup_connection(conn) 235: end
Connect to the database. In addition to the usual database options, the following options have an effect: :config_default_group, :config_local_infile, :encoding (or :charset), :read_timeout, :connect_timeout, :socket, :compress, and the SSL options (:sslkey, :sslcert, :sslca, :sslcapath, :sslcipher), as used in the source below.
# File lib/sequel/adapters/mysql.rb, line 93 93: def connect(server) 94: opts = server_opts(server) 95: conn = Mysql.init 96: conn.options(Mysql::READ_DEFAULT_GROUP, opts[:config_default_group] || "client") 97: conn.options(Mysql::OPT_LOCAL_INFILE, opts[:config_local_infile]) if opts.has_key?(:config_local_infile) 98: conn.ssl_set(opts[:sslkey], opts[:sslcert], opts[:sslca], opts[:sslcapath], opts[:sslcipher]) if opts[:sslca] || opts[:sslkey] 99: if encoding = opts[:encoding] || opts[:charset] 100: # Set encoding before connecting so that the mysql driver knows what 101: # encoding we want to use, but this can be overridden by READ_DEFAULT_GROUP. 102: conn.options(Mysql::SET_CHARSET_NAME, encoding) 103: end 104: if read_timeout = opts[:read_timeout] and defined? Mysql::OPT_READ_TIMEOUT 105: conn.options(Mysql::OPT_READ_TIMEOUT, read_timeout) 106: end 107: if connect_timeout = opts[:connect_timeout] and defined? Mysql::OPT_CONNECT_TIMEOUT 108: conn.options(Mysql::OPT_CONNECT_TIMEOUT, connect_timeout) 109: end 110: conn.real_connect( 111: opts[:host] || 'localhost', 112: opts[:user], 113: opts[:password], 114: opts[:database], 115: (opts[:port].to_i if opts[:port]), 116: opts[:socket], 117: Mysql::CLIENT_MULTI_RESULTS + 118: Mysql::CLIENT_MULTI_STATEMENTS + 119: (opts[:compress] == false ? 0 : Mysql::CLIENT_COMPRESS) 120: ) 121: sqls = mysql_connection_setting_sqls 122: 123: # Set encoding a slightly different way after connecting, 124: # in case the READ_DEFAULT_GROUP overrode the provided encoding. 125: # Doesn't work across implicit reconnects, but Sequel doesn't turn on 126: # that feature. 127: sqls.unshift("SET NAMES #{literal(encoding.to_s)}") if encoding 128: 129: sqls.each{|sql| log_yield(sql){conn.query(sql)}} 130: 131: add_prepared_statements_cache(conn) 132: conn 133: end
Create an instance of swift_class for the given options.
# File lib/sequel/adapters/swift.rb, line 59 59: def connect(server) 60: opts = server_opts(server) 61: opts[:pass] = opts[:password] 62: setup_connection(swift_class.new(opts)) 63: end
Modify the type translators for the date, time, and timestamp types depending on the value given.
# File lib/sequel/adapters/mysql.rb, line 137 137: def convert_invalid_date_time=(v) 138: m0 = ::Sequel.method(:string_to_time) 139: @conversion_procs[11] = (v != false) ? lambda{|v| convert_date_time(v, &m0)} : m0 140: m1 = ::Sequel.method(:string_to_date) 141: m = (v != false) ? lambda{|v| convert_date_time(v, &m1)} : m1 142: [10, 14].each{|i| @conversion_procs[i] = m} 143: m2 = method(:to_application_timestamp) 144: m = (v != false) ? lambda{|v| convert_date_time(v, &m2)} : m2 145: [7, 12].each{|i| @conversion_procs[i] = m} 146: @convert_invalid_date_time = v 147: end
Modify the type translator used for the tinyint type based on the value given.
# File lib/sequel/adapters/mysql.rb, line 151 151: def convert_tinyint_to_bool=(v) 152: @conversion_procs[1] = TYPE_TRANSLATOR.method(v ? :boolean : :integer) 153: @convert_tinyint_to_bool = v 154: end
Dump foreign key constraints for all tables as a migration. This complements the :foreign_keys=>false option to dump_schema_migration. This only dumps the constraints (not the columns) using alter_table/add_foreign_key with an array of columns.
Note that the migration this produces does not have a down block, so you cannot reverse it.
# File lib/sequel/extensions/schema_dumper.rb, line 20 20: def dump_foreign_key_migration(options={}) 21: ts = tables(options) 22: "Sequel.migration do\n change do\n\#{ts.sort_by{|t| t.to_s}.map{|t| dump_table_foreign_keys(t)}.reject{|x| x == ''}.join(\"\\n\\n\").gsub(/^/o, ' ')}\n end\nend\n" 23: end
Dump indexes for all tables as a migration. This complements the :indexes=>false option to dump_schema_migration. Options:
:same_db : | Create a dump for the same database type, so don't ignore errors if the index statements fail. |
:index_names : | If set to false, don't record names of indexes. If set to :namespace, prepend the table name to the index name if the database does not use a global index namespace. |
# File lib/sequel/extensions/schema_dumper.rb, line 39 39: def dump_indexes_migration(options={}) 40: ts = tables(options) 41: "Sequel.migration do\n change do\n\#{ts.sort_by{|t| t.to_s}.map{|t| dump_table_indexes(t, :add_index, options)}.reject{|x| x == ''}.join(\"\\n\\n\").gsub(/^/o, ' ')}\n end\nend\n" 42: end
Return a string that contains a Sequel::Migration subclass that when run would recreate the database structure. An example follows the options. Options:
:same_db : | Don't attempt to translate database types to ruby types. If this isn't set to true, all database types will be translated to ruby types, but there is no guarantee that the migration generated will yield the same type. Without this set, types that aren't recognized will be translated to a string-like type. |
:foreign_keys : | If set to false, don't dump foreign keys (they can be added later via dump_foreign_key_migration). |
:indexes : | If set to false, don't dump indexes (they can be added later via dump_indexes_migration). |
:index_names : | If set to false, don't record names of indexes. If set to :namespace, prepend the table name to the index name. |
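For example, assuming the schema_dumper extension has been loaded (how it is loaded depends on the Sequel version):

Sequel.extension :schema_dumper
puts DB.dump_schema_migration(:same_db=>true)
puts DB.dump_schema_migration(:indexes=>false) # also skips foreign keys unless :foreign_keys is given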
# File lib/sequel/extensions/schema_dumper.rb, line 64 64: def dump_schema_migration(options={}) 65: options = options.dup 66: if options[:indexes] == false && !options.has_key?(:foreign_keys) 67: # Unless foreign_keys option is specifically set, disable if indexes 68: # are disabled, as foreign keys that point to non-primary keys rely 69: # on unique indexes being created first 70: options[:foreign_keys] = false 71: end 72: 73: ts = sort_dumped_tables(tables(options), options) 74: skipped_fks = if sfk = options[:skipped_foreign_keys] 75: # Handle skipped foreign keys by adding them at the end via 76: # alter_table/add_foreign_key. Note that skipped foreign keys 77: # probably result in a broken down migration. 78: sfka = sfk.sort_by{|table, fks| table.to_s}.map{|table, fks| dump_add_fk_constraints(table, fks.values)} 79: sfka.join("\n\n").gsub(/^/o, ' ') unless sfka.empty? 80: end 81: 82: "Sequel.migration do\n change do\n\#{ts.map{|t| dump_table_schema(t, options)}.join(\"\\n\\n\").gsub(/^/o, ' ')}\#{\"\\n \\n\" if skipped_fks}\#{skipped_fks}\n end\nend\n" 83: end
Return a string with a create table block that will recreate the given table's schema. Takes the same options as dump_schema_migration.
# File lib/sequel/extensions/schema_dumper.rb, line 94 94: def dump_table_schema(table, options={}) 95: table = table.value.to_s if table.is_a?(SQL::Identifier) 96: gen = dump_table_generator(table, options) 97: commands = [gen.dump_columns, gen.dump_constraints, gen.dump_indexes].reject{|x| x == ''}.join("\n\n") 98: "create_table(#{table.inspect}#{', :ignore_index_errors=>true' if !options[:same_db] && options[:indexes] != false && !gen.indexes.empty?}) do\n#{commands.gsub(/^/o, ' ')}\nend" 99: end
Execute the given SQL. If a block is given, it should be a SELECT statement or something else that returns rows.
# File lib/sequel/adapters/jdbc.rb, line 239 239: def execute(sql, opts={}, &block) 240: return call_sproc(sql, opts, &block) if opts[:sproc] 241: return execute_prepared_statement(sql, opts, &block) if [Symbol, Dataset].any?{|c| sql.is_a?(c)} 242: synchronize(opts[:server]) do |conn| 243: statement(conn) do |stmt| 244: if block 245: yield log_yield(sql){stmt.executeQuery(sql)} 246: else 247: case opts[:type] 248: when :ddl 249: log_yield(sql){stmt.execute(sql)} 250: when :insert 251: log_yield(sql) do 252: if requires_return_generated_keys? 253: stmt.executeUpdate(sql, JavaSQL::Statement.RETURN_GENERATED_KEYS) 254: else 255: stmt.executeUpdate(sql) 256: end 257: end 258: last_insert_id(conn, opts.merge(:stmt=>stmt)) 259: else 260: log_yield(sql){stmt.executeUpdate(sql)} 261: end 262: end 263: end 264: end 265: end
Execute the given SQL. If a block is given, the DataObjects::Reader created is yielded to it. A block should not be provided unless a SELECT statement is being used (or something else that returns rows). If no block is given, the return value is the insert id if opts[:type] is :insert, or the number of affected rows otherwise.
# File lib/sequel/adapters/do.rb, line 69 69: def execute(sql, opts={}) 70: synchronize(opts[:server]) do |conn| 71: begin 72: command = conn.create_command(sql) 73: res = log_yield(sql){block_given? ? command.execute_reader : command.execute_non_query} 74: rescue ::DataObjects::Error => e 75: raise_error(e) 76: end 77: if block_given? 78: begin 79: yield(res) 80: ensure 81: res.close if res 82: end 83: elsif opts[:type] == :insert 84: res.insert_id 85: else 86: res.affected_rows 87: end 88: end 89: end
Execute the given SQL, yielding a Swift::Result if a block is given.
# File lib/sequel/adapters/swift.rb, line 66 66: def execute(sql, opts={}) 67: synchronize(opts[:server]) do |conn| 68: begin 69: res = log_yield(sql){conn.execute(sql)} 70: yield res if block_given? 71: nil 72: rescue ::Swift::Error => e 73: raise_error(e) 74: end 75: end 76: end
Drop any prepared statements on the connection when executing DDL. This is because prepared statements lock the table in such a way that you can't drop or alter the table while a prepared statement that references it still exists.
# File lib/sequel/adapters/sqlite.rb, line 134
def execute_ddl(sql, opts={})
  synchronize(opts[:server]) do |conn|
    conn.prepared_statements.values.each{|cps, s| cps.close}
    conn.prepared_statements.clear
    super
  end
end
Return the number of matched rows when executing a delete/update statement.
# File lib/sequel/adapters/mysql.rb, line 157
def execute_dui(sql, opts={})
  execute(sql, opts){|c| return affected_rows(c)}
end
Execute the SQL on this database, returning the number of affected rows.
# File lib/sequel/adapters/swift.rb, line 80
def execute_dui(sql, opts={})
  synchronize(opts[:server]) do |conn|
    begin
      log_yield(sql){conn.execute(sql).affected_rows}
    rescue ::Swift::Error => e
      raise_error(e)
    end
  end
end
Execute the SQL on this database, returning the primary key of the table being inserted to.
# File lib/sequel/adapters/swift.rb, line 92
def execute_insert(sql, opts={})
  synchronize(opts[:server]) do |conn|
    begin
      log_yield(sql){conn.execute(sql).insert_id}
    rescue ::Swift::Error => e
      raise_error(e)
    end
  end
end
Return the last inserted id when executing an insert statement.
# File lib/sequel/adapters/mysql.rb, line 162
def execute_insert(sql, opts={})
  execute(sql, opts){|c| return c.insert_id}
end
Load an extension into the receiver. In addition to requiring the extension file, this also modifies the database to work with the extension (usually extending it with a module defined in the extension file). If no related extension file exists or the extension does not have specific support for Database objects, an Error will be raised. Returns self.
# File lib/sequel/database/misc.rb, line 137
def extension(*exts)
  Sequel.extension(*exts)
  exts.each do |ext|
    if pr = Sequel.synchronize{EXTENSIONS[ext]}
      pr.call(self)
    else
      raise(Error, "Extension #{ext} does not have specific support handling individual databases")
    end
  end
  self
end
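A brief usage sketch; the extension name used here (:arbitrary_servers) is only one example of an extension that registers Database-level support, and which extensions are available depends on the Sequel version:

  # Requires the extension file and applies its Database support to DB.
  DB.extension(:arbitrary_servers)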
Convert the given timestamp from the application's timezone to the database's timezone, or to the default database timezone if the database does not have a timezone.
# File lib/sequel/database/misc.rb, line 152
def from_application_timestamp(v)
  Sequel.convert_output_timestamp(v, timezone)
end
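A minimal sketch, assuming the application timezone has been set to :local and the default database timezone to :utc:

  Sequel.application_timezone = :local
  Sequel.database_timezone = :utc
  DB.from_application_timestamp(Time.now)  # => the same instant, expressed in UTC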
Return true if already in a transaction given the options, false otherwise. Respects the :server option for selecting a shard.
# File lib/sequel/database/misc.rb, line 165
def in_transaction?(opts={})
  synchronize(opts[:server]){|conn| !!_trans(conn)}
end
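For example (the :read_only shard below is hypothetical and assumes sharding is configured):

  DB.in_transaction?                        # => false
  DB.transaction{DB.in_transaction?}        # => true
  DB.in_transaction?(:server => :read_only) # checks the :read_only shard's connection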
Use the JDBC metadata to get the index information for the table.
# File lib/sequel/adapters/jdbc.rb, line 281
def indexes(table, opts={})
  m = output_identifier_meth
  im = input_identifier_meth
  schema, table = schema_and_table(table)
  schema ||= opts[:schema]
  schema = im.call(schema) if schema
  table = im.call(table)
  indexes = {}
  metadata(:getIndexInfo, nil, schema, table, false, true) do |r|
    next unless name = r[:column_name]
    next if respond_to?(:primary_key_index_re, true) and r[:index_name] =~ primary_key_index_re
    i = indexes[m.call(r[:index_name])] ||= {:columns=>[], :unique=>[false, 0].include?(r[:non_unique])}
    i[:columns] << m.call(name)
  end
  indexes
end
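The return value has the same shape as Database#indexes on other adapters. For a hypothetical :items table with a unique index on :name it might look like:

  DB.indexes(:items)
  # => {:items_name_index=>{:columns=>[:name], :unique=>true}}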
Returns a string representation of the database object, including the class name, the connection URI, and the options used when connecting (if any).
# File lib/sequel/database/misc.rb, line 171
def inspect
  a = []
  a << uri.inspect if uri
  if (oo = opts[:orig_opts]) && !oo.empty?
    a << oo.inspect
  end
  "#<#{self.class}: #{a.join(' ')}>"
end
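Illustrative output (the adapter and URI shown are hypothetical and depend on how the database was connected):

  DB.inspect
  # => "#<Sequel::Postgres::Database: \"postgres://localhost/mydb\">"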
Whether or not JNDI is being used for this connection.
# File lib/sequel/adapters/jdbc.rb, line 299
def jndi?
  !!(uri =~ JNDI_URI_REGEXP)
end
Return the cached prepared statement object with the given name, synchronizing access to the prepared statements cache.
# File lib/sequel/database/misc.rb, line 190
def prepared_statement(name)
  Sequel.synchronize{prepared_statements[name]}
end
Default serial primary key options, used by the table creation code.
# File lib/sequel/database/misc.rb, line 196
def serial_primary_key_options
  {:primary_key => true, :type => Integer, :auto_increment => true}
end
Cache the prepared statement object at the given name.
# File lib/sequel/database/misc.rb, line 201
def set_prepared_statement(name, ps)
  Sequel.synchronize{prepared_statements[name] = ps}
end
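prepared_statement and set_prepared_statement are the accessor pair around the prepared statements cache. Dataset#prepare normally stores the statement for you, so calling them directly is rarely needed; as a sketch (the :items table and statement name are hypothetical):

  ps = DB[:items].filter(:id => :$i).prepare(:select, :items_by_id)
  # Dataset#prepare already caches the statement under :items_by_id;
  # doing so by hand would look like this:
  DB.set_prepared_statement(:items_by_id, ps)
  DB.prepared_statement(:items_by_id)  # => ps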
Return the subadapter type for this database, e.g. sqlite3 for do:sqlite3::memory:.
# File lib/sequel/adapters/do.rb, line 105
def subadapter
  uri.split(":").first
end
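For example, for a DataObjects SQLite connection (the connection string here is illustrative):

  DB = Sequel.connect('do:sqlite3::memory:')
  DB.uri        # => "sqlite3::memory:"
  DB.subadapter # => "sqlite3"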
Whether the database supports CREATE TABLE IF NOT EXISTS syntax, false by default.
# File lib/sequel/database/misc.rb, line 207
def supports_create_table_if_not_exists?
  false
end
Whether the database supports DROP TABLE IF EXISTS syntax, default is the same as supports_create_table_if_not_exists?.
# File lib/sequel/database/misc.rb, line 213
def supports_drop_table_if_exists?
  supports_create_table_if_not_exists?
end
Whether the database and adapter support prepared transactions (two-phase commit), false by default.
# File lib/sequel/database/misc.rb, line 219
def supports_prepared_transactions?
  false
end
Whether the database and adapter support savepoints, false by default.
# File lib/sequel/database/misc.rb, line 224
def supports_savepoints?
  false
end
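As an illustrative capability check, application code can branch on this predicate rather than on adapter names; the nested :savepoint transaction is only valid when it returns true:

  if DB.supports_savepoints?
    DB.transaction do
      DB.transaction(:savepoint => true) do
        # work that can be rolled back independently of the outer transaction
      end
    end
  end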
Whether the database and adapter support savepoints inside prepared transactions (two-phase commit), default is false.
# File lib/sequel/database/misc.rb, line 230
def supports_savepoints_in_prepared_transactions?
  supports_prepared_transactions? && supports_savepoints?
end
Whether the database and adapter support transaction isolation levels, false by default.
# File lib/sequel/database/misc.rb, line 235
def supports_transaction_isolation_levels?
  false
end
Whether DDL statements work correctly in transactions, false by default.
# File lib/sequel/database/misc.rb, line 240
def supports_transactional_ddl?
  false
end
Handle Integer and Float arguments, since SQLite can store timestamps as integers and floats.
# File lib/sequel/adapters/sqlite.rb, line 148
def to_application_timestamp(s)
  case s
  when String
    super
  when Integer
    super(Time.at(s).to_s)
  when Float
    super(DateTime.jd(s).to_s)
  else
    raise Sequel::Error, "unhandled type when converting to : #{s.inspect} (#{s.class.inspect})"
  end
end
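A sketch of how the different input types are routed (the values below are arbitrary):

  DB.to_application_timestamp('2011-10-11 12:13:14')  # String passed straight to super
  DB.to_application_timestamp(1300000000)             # Integer treated as epoch seconds
  DB.to_application_timestamp(2455000.5)              # Float treated as a Julian day number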
Typecast the value to the given column_type. Calls typecast_value_#{column_type} if the method exists, otherwise returns the value. This method should raise Sequel::InvalidValue if the assigned value is invalid.
# File lib/sequel/database/misc.rb, line 261
def typecast_value(column_type, value)
  return nil if value.nil?
  meth = "typecast_value_#{column_type}"
  begin
    respond_to?(meth, true) ? send(meth, value) : value
  rescue ArgumentError, TypeError => e
    raise Sequel.convert_exception_class(e, InvalidValue)
  end
end
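For example:

  DB.typecast_value(:integer, '5')        # => 5
  DB.typecast_value(:date, '2011-10-11')  # => a Date object
  DB.typecast_value(:integer, 'foo')      # raises Sequel::InvalidValue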
The uri for this connection. You can specify the uri using the :uri, :url, or :database options. You don't need to worry about this if you use Sequel.connect with a JDBC connection string.
# File lib/sequel/adapters/jdbc.rb, line 312
def uri(opts={})
  opts = @opts.merge(opts)
  ur = opts[:uri] || opts[:url] || opts[:database]
  ur =~ /^\Ajdbc:/ ? ur : "jdbc:#{ur}"
end
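Illustrative behavior (the connection strings below are hypothetical): the jdbc: prefix is only added when it is missing.

  # With DB = Sequel.connect('jdbc:postgresql://localhost/mydb')
  DB.uri  # => "jdbc:postgresql://localhost/mydb"
  # With a :uri option of 'postgresql://localhost/mydb'
  DB.uri  # => "jdbc:postgresql://localhost/mydb"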
Return the DataObjects URI for the Sequel URI, removing the do: prefix.
# File lib/sequel/adapters/do.rb, line 111
def uri(opts={})
  opts = @opts.merge(opts)
  (opts[:uri] || opts[:url]).sub(/\Ado:/, '')
end
These methods generally execute SQL code on the database server.