
Welcome to Doma 2¶
Doma 2 is a database access framework for Java 8+. Doma has various strengths:
- Verifies and generates source code at compile time using annotation processing.
- Maps database columns to user-defined Java objects.
- Uses SQL templates, called “two-way SQL”.
- Supports classes introduced in Java 8, such as java.time.LocalDate, java.util.Optional, and java.util.stream.Stream.
- Has no dependencies on other libraries.
This document consists of the following sections:
User Documentation¶
Get started!(Eclipse)¶
Summary¶
This section explains how to set up a development environment and how to execute basic database access.
Install Eclipse¶
Install Eclipse.
Note
This document was checked against Eclipse Standard 4.4, but Doma Tools also runs on other distributions such as Eclipse IDE for Java EE Developers, and it should work on newer versions as well.
Install the Doma Tools Eclipse plugin¶
Doma Tools is an Eclipse plugin that enables you to jump back and forth between Java files and SQL files. The plugin is not required to use Doma, but using it improves productivity.
Select Help > Install New Software… from the menu bar and enter the following URL in the ‘Work with’ text box.
http://dl.bintray.com/domaframework/eclipse/
The installable plugins are listed as shown below. Check the latest version of Doma Tools and follow the dialog to finish the installation.

File association¶
Doma Tools hooks file updates to trigger annotation processing. For this to work, SQL files must be opened inside Eclipse.
Select Eclipse > Preferences… or Window > Preferences from the menu bar to open the preferences dialog.
Associate the .sql extension with the Text Editor as shown in the figure below.

Similarly, associate the .script extension with the Text Editor.

Note
You can skip this setting if you use Eclipse IDE for Java EE Developers, because SQL files are associated with a specialized editor by default.
Note
We recommend a development style in which you write SQL with RDBMS-specific tools (such as Oracle SQL Developer or pgAdmin) and copy the finished SQL into the Eclipse editor.
Import template project¶
Clone simple-boilerplate from GitHub.
$ git clone https://github.com/domaframework/simple-boilerplate.git
Move to the cloned directory.
$ cd simple-boilerplate
Create the Eclipse configuration files with the following command.
$ ./gradlew eclipse
Note
On Windows, run gradlew eclipse instead of ./gradlew eclipse.
Note
Set the environment variable JAVA_HOME to the installation directory of JDK 8 (or JDK 9 or 10).
It is needed to run gradlew.
Note
The settings for annotation processing are included in the generated Eclipse configuration files. See Build with Eclipse if you want to configure them manually.
Select File > Import… from the Eclipse menu bar, choose ‘Existing Projects into Workspace’, and import simple-boilerplate.

Select the project and run JUnit to confirm that the import succeeded. If the single test case passes, the import finished normally.
Structure of template project¶
The project's source code is structured as follows.
─ src
├── main
│ ├── java
│ │ └── boilerplate
│ │ ├── AppConfig.java
│ │ ├── dao
│ │ │ ├── AppDao.java
│ │ │ └── EmployeeDao.java
│ │ └── entity
│ │ └── Employee.java
│ └── resources
│ └── META-INF
│ └── boilerplate
│ └── dao
│ ├── AppDao
│ │ ├── create.script
│ │ └── drop.script
│ └── EmployeeDao
│ ├── selectAll.sql
│ └── selectById.sql
└── test
├── java
│ └── boilerplate
│ ├── DbResource.java
│ └── dao
│ └── EmployeeDaoTest.java
└── resources
The important files are described below.
- AppConfig.java: the configuration class needed to run Doma.
- AppDao.java: a utility that creates and drops the database schema used by this application. It is not needed in a production environment. The script files under META-INF/boilerplate/dao/AppDao/ are used to create and drop the schema.
- Employee.java: the entity class that corresponds to the EMPLOYEE table in the database.
- EmployeeDao.java: the DAO interface used to fetch and update the Employee class. The SQL files under META-INF/boilerplate/dao/EmployeeDao/ are used.
- EmployeeDaoTest.java: a test that uses EmployeeDao. You can learn about Doma by adding test cases to this file. Tests do not affect each other through data changes because the database schema is created and dropped for each test method.
Navigating between Java and SQL files¶
EmployeeDao.java is defined as follows.
@Dao(config = AppConfig.class)
public interface EmployeeDao {
@Select
List<Employee> selectAll();
@Select
Employee selectById(Integer id);
@Insert
int insert(Employee employee);
@Update
int update(Employee employee);
@Delete
int delete(Employee employee);
}
Move the cursor to the selectById method in the Eclipse editor and right-click to open the context menu.
You can jump to the META-INF/boilerplate/dao/EmployeeDao/selectById.sql file by selecting Doma > Jump to SQL from the menu.
Next, place the cursor anywhere in the META-INF/boilerplate/dao/EmployeeDao/selectById.sql file and open the context menu.
You can go back to the EmployeeDao.java file by selecting Doma > Jump to Java from the menu.
SQL File¶
Open the META-INF/boilerplate/dao/EmployeeDao/selectById.sql file.
Its content is as follows.
select
/*%expand*/*
from
employee
where
id = /* id */0
The /*%expand*/ directive indicates that the column list is expanded by referencing the entity class mapped to the Java method.
The /* id */ comment indicates that the value of the Java method parameter id is bound to the SQL at this position.
The 0 placed right after it is test data.
Thanks to this test data, you can easily check that the SQL contains no mistakes by executing it as-is with a tool.
The test data is not used when the SQL is executed from a Java program.
For details, see SQL templates.
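For illustration, here is roughly what happens when this method is called from Java (a sketch; the expanded column list comes from the Employee entity):

// Calling the DAO method from Java:
Employee employee = dao.selectById(1);
// The template above is expanded and executed roughly as:
//   select age, id, name, version from employee where id = ?
// with 1 bound to the placeholder; the trailing 0 test data is not used.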
Search¶
To execute a search, call a DAO method annotated with @Select.
Add a search method¶
This section shows how to add a query that searches for employees younger than a given age.
Add the following code to EmployeeDao.
@Select
List<Employee> selectByAge(Integer age);
At this point, annotation processing shows the following error message in Eclipse.
[DOMA4019] The file[META-INF/boilerplate/dao/EmployeeDao/selectByAge.sql] is not found from the classpath.
Move the cursor to the selectByAge method, open the context menu with a right-click, and select Doma > Jump to SQL from the menu.
A dialog for creating the SQL file is shown as below.

Press ‘Finish’ to create the file.
After creating the file, save it while it is still empty and go back to EmployeeDao; the error message changes.
[DOMA4020] The SQL template is empty. PATH=[META-INF/boilerplate/dao/EmployeeDao/selectByAge.sql].
Go back to the selectByAge.sql file and write the following SQL.
select
/*%expand*/*
from
employee
where
age < /* age */0
The error is now resolved.
Execute the search¶
Now execute the search you just created.
Add the following code to EmployeeDaoTest.
@Test
public void testSelectByAge() {
TransactionManager tm = AppConfig.singleton().getTransactionManager();
tm.required(() -> {
List<Employee> employees = dao.selectByAge(35);
assertEquals(2, employees.size());
});
}
Run JUnit and confirm that this code works.
The SQL generated for this search is as follows.
select
age, id, name, version
from
employee
where
age < 35
Insert¶
To execute an insert, call a DAO method annotated with @Insert.
Execute the insert¶
Confirm that the following code exists in EmployeeDao.
@Insert
int insert(Employee employee);
Execute the insert using this code.
Add the following code to EmployeeDaoTest.
@Test
public void testInsert() {
TransactionManager tm = AppConfig.singleton().getTransactionManager();
Employee employee = new Employee();
// First transaction
// Execute inserting
tm.required(() -> {
employee.name = "HOGE";
employee.age = 20;
dao.insert(employee);
assertNotNull(employee.id);
});
// Second transaction
// Confirm that inserting is success
tm.required(() -> {
Employee employee2 = dao.selectById(employee.id);
assertEquals("HOGE", employee2.name);
assertEquals(Integer.valueOf(20), employee2.age);
assertEquals(Integer.valueOf(1), employee2.version);
});
}
Run JUnit and confirm that this code works.
The SQL generated for this insert is as follows.
insert into Employee (age, id, name, version) values (20, 100, 'HOGE', 1)
The identifier and the version number are set automatically.
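This works because of the annotations on the Employee entity: @Id together with @GeneratedValue marks the identifier for automatic generation, and @Version marks the optimistic-lock counter that Doma initializes on insert and increments on update. A rough sketch of such an entity (the generation strategy and the sequence name are assumptions for illustration; see Employee.java in the template project for the real definition):

@Entity
public class Employee {

    // assumed strategy and sequence name, for illustration only
    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE)
    @SequenceGenerator(sequence = "EMPLOYEE_SEQ")
    public Integer id;

    public String name;

    public Integer age;

    // optimistic locking counter: initialized on insert, incremented on update
    @Version
    public Integer version;
}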
Update¶
To execute an update, call a DAO method annotated with @Update.
Execute the update¶
Confirm that the following code exists in EmployeeDao.
@Update
int update(Employee employee);
Execute the update using this code.
Add the following code to EmployeeDaoTest.
@Test
public void testUpdate() {
TransactionManager tm = AppConfig.singleton().getTransactionManager();
// First transaction
// Search and update age field
tm.required(() -> {
Employee employee = dao.selectById(1);
assertEquals("ALLEN", employee.name);
assertEquals(Integer.valueOf(30), employee.age);
assertEquals(Integer.valueOf(0), employee.version);
employee.age = 50;
dao.update(employee);
assertEquals(Integer.valueOf(1), employee.version);
});
// Second transaction
// Confirm that updating is success
tm.required(() -> {
Employee employee = dao.selectById(1);
assertEquals("ALLEN", employee.name);
assertEquals(Integer.valueOf(50), employee.age);
assertEquals(Integer.valueOf(1), employee.version);
});
}
Run JUnit and confirm that this code works.
The SQL generated for this update is as follows.
update Employee set age = 50, name = 'ALLEN', version = 0 + 1 where id = 1 and version = 0
The version number used for optimistic concurrency control is incremented automatically.
Delete¶
To execute a delete, call a DAO method annotated with @Delete.
Execute the delete¶
Confirm that the following code exists in EmployeeDao.
@Delete
int delete(Employee employee);
Execute the delete using this code.
Add the following code to EmployeeDaoTest.
@Test
public void testDelete() {
TransactionManager tm = AppConfig.singleton().getTransactionManager();
// First transaction
// Execute deleting
tm.required(() -> {
Employee employee = dao.selectById(1);
dao.delete(employee);
});
// Second transaction
// Confirm that deleting is success
tm.required(() -> {
Employee employee = dao.selectById(1);
assertNull(employee);
});
}
Run JUnit and confirm that this code works.
The SQL generated for this delete is as follows.
delete from Employee where id = 1 and version = 0
The identifier and the version number are included in the search condition.
Get started!(IntelliJ IDEA)¶
Summary¶
This section explains how to set up a development environment and how to execute basic database access.
Install IntelliJ IDEA¶
Install IntelliJ IDEA Community Edition.
Note
This document uses IntelliJ IDEA Community Edition as the IDE. If you use IntelliJ IDEA Ultimate Edition, we recommend using it together with the IntelliJ Doma support plugin.
Import template project¶
Clone simple-boilerplate from GitHub.
$ git clone https://github.com/domaframework/simple-boilerplate.git
Start IntelliJ IDEA, execute Import Project, and select the cloned simple-boilerplate.

Check Import project from external model and select Gradle.

Check Use auto-import, uncheck Create separate module per source set, and finally press Finish.

If a message saying the project synced successfully is shown in the Build tool window, the import succeeded.
Annotation processor settings¶
Select Open Module Settings from the context menu in the Project tool window.

Open the Paths settings of the module and confirm that Inherit project compile output path is selected. If it is not, select it.

Open Build, Execution, Deployment > Compiler > Annotation Processors in Preferences. Check Enable annotation processing and Module content root. Enter gen/production in Production sources directory and gen/test in Test sources directory, then press OK.

Execute Build Project from the menu. The build outputs the code generated by the annotation processor to gen/production. Select Mark Directory as | Generated Sources Root from the context menu in the Project tool window to add gen/production to the source path.

Structure of template project¶
The project's source code is structured as follows.
─ src
├── main
│ ├── java
│ │ └── boilerplate
│ │ ├── AppConfig.java
│ │ ├── dao
│ │ │ ├── AppDao.java
│ │ │ └── EmployeeDao.java
│ │ └── entity
│ │ └── Employee.java
│ └── resources
│ └── META-INF
│ └── boilerplate
│ └── dao
│ ├── AppDao
│ │ ├── create.script
│ │ └── drop.script
│ └── EmployeeDao
│ ├── selectAll.sql
│ └── selectById.sql
└── test
├── java
│ └── boilerplate
│ ├── DbResource.java
│ └── dao
│ └── EmployeeDaoTest.java
└── resources
The important files are described below.
- AppConfig.java: the configuration class needed to run Doma.
- AppDao.java: a utility that creates and drops the database schema used by this application. It is not needed in a production environment. The script files under META-INF/boilerplate/dao/AppDao/ are used to create and drop the schema.
- Employee.java: the entity class that corresponds to the EMPLOYEE table in the database.
- EmployeeDao.java: the DAO interface used to fetch and update the Employee class. The SQL files under META-INF/boilerplate/dao/EmployeeDao/ are used.
- EmployeeDaoTest.java: a test that uses EmployeeDao. You can learn about Doma by adding test cases to this file. Tests do not affect each other through data changes because the database schema is created and dropped for each test method.
SQL file¶
Open the META-INF/boilerplate/dao/EmployeeDao/selectById.sql file.
Its content is as follows.
select
/*%expand*/*
from
employee
where
id = /* id */0
The /*%expand*/ directive indicates that the column list is expanded by referencing the entity class mapped to the Java method.
The /* id */ comment indicates that the value of the Java method parameter id is bound to the SQL at this position.
The 0 placed right after it is test data.
Thanks to this test data, you can easily check that the SQL contains no mistakes by executing it as-is with a tool.
The test data is not used when the SQL is executed from a Java program.
For details, see SQL templates.
Search¶
To execute a search, call a DAO method annotated with @Select.
Add a search method¶
This section shows how to add a query that searches for employees younger than a given age.
Add the following code to EmployeeDao and run a build.
@Select
List<Employee> selectByAge(Integer age);
At this point, annotation processing shows the following error message in the Messages tool window.
[DOMA4019] The file[META-INF/boilerplate/dao/EmployeeDao/selectByAge.sql] is not found from the classpath.
Create a file named selectByAge.sql under src/main/resources/META-INF/boilerplate/dao/EmployeeDao and rebuild while the file is still empty.
The error message changes.
[DOMA4020] The SQL template is empty. PATH=[META-INF/boilerplate/dao/EmployeeDao/selectByAge.sql].
Go back to the selectByAge.sql file and write the following SQL.
select
/*%expand*/*
from
employee
where
age < /* age */0
Rebuild, and the error is resolved.
Execute the search¶
Now execute the search you just created.
Add the following code to EmployeeDaoTest.
@Test
public void testSelectByAge() {
TransactionManager tm = AppConfig.singleton().getTransactionManager();
tm.required(() -> {
List<Employee> employees = dao.selectByAge(35);
assertEquals(2, employees.size());
});
}
Run JUnit and confirm that this code works.
The SQL generated for this search is as follows.
select
age, id, name, version
from
employee
where
age < 35
Insert¶
To execute an insert, call a DAO method annotated with @Insert.
Execute the insert¶
Confirm that the following code exists in EmployeeDao.
@Insert
int insert(Employee employee);
Execute the insert using this code.
Add the following code to EmployeeDaoTest.
@Test
public void testInsert() {
TransactionManager tm = AppConfig.singleton().getTransactionManager();
Employee employee = new Employee();
// First transaction
// Execute inserting
tm.required(() -> {
employee.name = "HOGE";
employee.age = 20;
dao.insert(employee);
assertNotNull(employee.id);
});
// Second transaction
// Confirm that inserting is success
tm.required(() -> {
Employee employee2 = dao.selectById(employee.id);
assertEquals("HOGE", employee2.name);
assertEquals(Integer.valueOf(20), employee2.age);
assertEquals(Integer.valueOf(1), employee2.version);
});
}
Run JUnit and confirm that this code works.
The SQL generated for this insert is as follows.
insert into Employee (age, id, name, version) values (20, 100, 'HOGE', 1)
The identifier and the version number are set automatically.
Update¶
To execute an update, call a DAO method annotated with @Update.
Execute the update¶
Confirm that the following code exists in EmployeeDao.
@Update
int update(Employee employee);
Execute the update using this code.
Add the following code to EmployeeDaoTest.
@Test
public void testUpdate() {
TransactionManager tm = AppConfig.singleton().getTransactionManager();
// First transaction
// Search and update age field
tm.required(() -> {
Employee employee = dao.selectById(1);
assertEquals("ALLEN", employee.name);
assertEquals(Integer.valueOf(30), employee.age);
assertEquals(Integer.valueOf(0), employee.version);
employee.age = 50;
dao.update(employee);
assertEquals(Integer.valueOf(1), employee.version);
});
// Second transaction
// Confirm that updating is success
tm.required(() -> {
Employee employee = dao.selectById(1);
assertEquals("ALLEN", employee.name);
assertEquals(Integer.valueOf(50), employee.age);
assertEquals(Integer.valueOf(1), employee.version);
});
}
Run JUnit and confirm that this code works.
The SQL generated for this update is as follows.
update Employee set age = 50, name = 'ALLEN', version = 0 + 1 where id = 1 and version = 0
The version number used for optimistic concurrency control is incremented automatically.
Delete¶
To execute a delete, call a DAO method annotated with @Delete.
Execute the delete¶
Confirm that the following code exists in EmployeeDao.
@Delete
int delete(Employee employee);
Execute the delete using this code.
Add the following code to EmployeeDaoTest.
@Test
public void testDelete() {
TransactionManager tm = AppConfig.singleton().getTransactionManager();
// First transaction
// Execute deleting
tm.required(() -> {
Employee employee = dao.selectById(1);
dao.delete(employee);
});
// Second transaction
// Confirm that deleting is success
tm.required(() -> {
Employee employee = dao.selectById(1);
assertNull(employee);
});
}
Run JUnit and confirm that this code works.
The SQL generated for this delete is as follows.
delete from Employee where id = 1 and version = 0
The identifier and the version number are included in the search condition.
Configuration¶
Contents
- Configuration
  - Configurable items
    - DataSource
    - DataSource's name
    - SQL dialect
    - Logger
    - SQL File Repository
    - Controlling REQUIRES_NEW transaction
    - Loading classes
    - Choosing SQL format contained in exception messages
    - Handling unknown columns
    - Naming convention for tables and columns
    - Naming convention for keys of java.util.Map
    - Local transaction manager
    - Adding SQL identifiers to the SQLs as a comment
    - Command implementors
    - Query implementors
    - Query timeout
    - Max rows
    - Fetch size
    - Batch size
    - Providing entity listeners
    - Loading JDBC drivers
  - Configuration definition
The configurable items must be returned from the methods of the implementation class of the org.seasar.doma.jdbc.Config interface.
Configurable items¶
DataSource¶
Return a JDBC DataSource from the getDataSource method.
If you need the local transactions provided by Doma, return a LocalTransactionDataSource.
See also: Transaction
Note
Required item
DataSource’s name¶
Return the DataSource's name from the getDataSourceName method.
In an environment where multiple DataSources are used, the name is important; you have to give a unique name to each DataSource.
The default value is the fully qualified name of the implementation class of Config.
SQL dialect¶
Return a Dialect from the getDialect method.
You have to choose an appropriate dialect for the database you use.
Doma provides the following dialects:
Database | Dialect Name | Description |
---|---|---|
DB2 | Db2Dialect | |
H2 Database Engine 1.2.126 | H212126Dialect | H2 Database Engine 1.2.126 |
H2 Database | H2Dialect | H2 Database Engine 1.3.171 and above |
HSQLDB | HsqldbDialect | |
Microsoft SQL Server 2008 | Mssql2008Dialect | Microsoft SQL Server 2008 |
Microsoft SQL Server | MssqlDialect | Microsoft SQL Server 2012 and above |
MySQL | MySqlDialect | |
Oracle Database 11g | Oracle11Dialect | Oracle Database 11g |
Oracle Database | OracleDialect | Oracle Database 12c and above |
PostgreSQL | PostgresDialect | |
SQLite | SqliteDialect | |
These dialects are located in the org.seasar.doma.jdbc.dialect package.
Note
Required item
Logger¶
Return a JdbcLogger from the getJdbcLogger method.
Doma provides the following JdbcLogger:
- org.seasar.doma.jdbc.UtilLoggingJdbcLogger
The default JdbcLogger is UtilLoggingJdbcLogger, which uses java.util.logging.
SQL File Repository¶
Return a SqlFileRepository from the getSqlFileRepository method.
Doma provides the following SqlFileRepositories:
- org.seasar.doma.jdbc.GreedyCacheSqlFileRepository
- org.seasar.doma.jdbc.NoCacheSqlFileRepository
The default SqlFileRepository is GreedyCacheSqlFileRepository, which caches the results of SQL parsing without limit.
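If caching is not desirable (for example while editing SQL files frequently during development), you can return the non-caching implementation instead. A minimal sketch, assuming the default constructor of NoCacheSqlFileRepository:

@Override
public SqlFileRepository getSqlFileRepository() {
    // parse the SQL file on every access instead of caching the parse result
    return new NoCacheSqlFileRepository();
}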
Controlling REQUIRES_NEW transaction¶
Return a RequiresNewController from the getRequiresNewController method.
A RequiresNewController may begin new transactions in order to shorten the time locks are held.
This feature is used only when you use @TableGenerator, which generates identities using a table.
The default RequiresNewController does nothing.
Loading classes¶
Return a ClassHelper from the getClassHelper method.
When the application server or framework you use loads classes in its own specific way, consider creating your own ClassHelper.
The default ClassHelper loads classes mainly with Class#forName.
Choosing SQL format contained in exception messages¶
Return a SqlLogType from the getExceptionSqlLogType method.
The default SqlLogType includes the formatted SQL in exception messages.
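For example, if you do not want SQL to appear in exception messages at all, the override could look like this sketch (assuming the SqlLogType.NONE constant):

@Override
public SqlLogType getExceptionSqlLogType() {
    // do not embed the SQL in exception messages
    return SqlLogType.NONE;
}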
Handling unknown columns¶
Return an UnknownColumnHandler from the getUnknownColumnHandler method.
In result set mappings, if a column unknown to the entity class is found, the UnknownColumnHandler handles the situation.
The default UnknownColumnHandler throws an UnknownColumnException.
Naming convention for tables and columns¶
Return a Naming from the getNaming method.
The naming element of @Entity takes precedence over this value.
When you specify an explicit value in the name element of @Table or @Column, the naming convention is not applied to it.
The default Naming does nothing.
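If all entities in your application follow a single convention, you can set it once here instead of repeating the naming element on every @Entity. A minimal sketch, assuming the built-in Naming.SNAKE_UPPER_CASE constant matches your schema:

@Override
public Naming getNaming() {
    // e.g. maps the employeeName property to the EMPLOYEE_NAME column for every entity
    return Naming.SNAKE_UPPER_CASE;
}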
Naming convention for keys of java.util.Map¶
Return a MapKeyNaming from the getMapKeyNaming method.
The MapKeyNaming is used when a result set is mapped to java.util.Map<String, Object>.
The default MapKeyNaming does nothing.
Local transaction manager¶
Return a LocalTransactionManager from the getTransactionManager method.
The getTransactionManager method throws UnsupportedOperationException by default.
See also: Transaction
Adding SQL identifiers to the SQLs as a comment¶
Return a Commenter from the getCommenter method.
Doma provides the following commenter:
- org.seasar.doma.jdbc.CallerCommenter
The default Commenter does nothing.
Command implementors¶
Return a CommandImplementors from the getCommandImplementors method.
The CommandImplementors provides you with a hook into the execution of the JDBC API, for example.
Query implementors¶
Return a QueryImplementors from the getQueryImplementors method.
The QueryImplementors provides you with a hook to rewrite SQL statements, for example.
Query timeout¶
Return the query timeout (in seconds) from the getQueryTimeout method.
This value is used as the default in Queries.
Max rows¶
Return the max rows from the getMaxRows method.
This value is used as the default in Search.
Fetch size¶
Return the fetch size from the getFetchSize method.
This value is used as the default in Search.
Batch size¶
Return the batch size from the getBatchSize method.
This value is used as the default in Batch insert, Batch update and Batch delete.
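These sizing and timeout values are plain int overrides on the Config implementation. A sketch of what such overrides could look like (the numbers are arbitrary examples):

@Override
public int getQueryTimeout() {
    return 10; // seconds
}

@Override
public int getMaxRows() {
    return 500;
}

@Override
public int getFetchSize() {
    return 100;
}

@Override
public int getBatchSize() {
    return 50;
}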
Providing entity listeners¶
Return an EntityListenerProvider from the getEntityListenerProvider method.
When you want to get entity listeners from a dependency injection container, create your own EntityListenerProvider.
The default EntityListenerProvider gets the entity listener from the accepted supplier.
Loading JDBC drivers¶
All JDBC drivers are loaded automatically by the service provider mechanism.
Warning
In some specific environments, however, the mechanism doesn't work appropriately. For example, you may run into this when you use Apache Tomcat. See also: DriverManager, the service provider mechanism and memory leaks
Configuration definition¶
Simple definition¶
The simple definition is appropriate in the following cases:
- The configuration instance isn't managed by a dependency injection container
- Local transactions are used
@SingletonConfig
public class AppConfig implements Config {
private static final AppConfig CONFIG = new AppConfig();
private final Dialect dialect;
private final LocalTransactionDataSource dataSource;
private final TransactionManager transactionManager;
private AppConfig() {
dialect = new H2Dialect();
dataSource = new LocalTransactionDataSource(
"jdbc:h2:mem:tutorial;DB_CLOSE_DELAY=-1", "sa", null);
transactionManager = new LocalTransactionManager(
dataSource.getLocalTransaction(getJdbcLogger()));
}
@Override
public Dialect getDialect() {
return dialect;
}
@Override
public DataSource getDataSource() {
return dataSource;
}
@Override
public TransactionManager getTransactionManager() {
return transactionManager;
}
public static AppConfig singleton() {
return CONFIG;
}
}
Note
Remember to annotate the class with @SingletonConfig.
Specify the above class in the config element of @Dao.
@Dao(config = AppConfig.class)
public interface EmployeeDao {
@Select
Employee selectById(Integer id);
}
Advanced definition¶
The advanced definition is appropriate in the following cases:
- The configuration instance is managed as a singleton object by a dependency injection container
- The transaction manager is provided by the application server or framework you use
Suppose the dialect and the dataSource are injected by the dependency injection container:
public class AppConfig implements Config {
private Dialect dialect;
private DataSource dataSource;
@Override
public Dialect getDialect() {
return dialect;
}
public void setDialect(Dialect dialect) {
this.dialect = dialect;
}
@Override
public DataSource getDataSource() {
return dataSource;
}
public void setDataSource(DataSource dataSource) {
this.dataSource = dataSource;
}
}
To inject an instance of the above class into your DAO implementation instances, annotate your DAO interfaces with @AnnotateWith:
@Dao
@AnnotateWith(annotations = {
@Annotation(target = AnnotationTarget.CONSTRUCTOR, type = javax.inject.Inject.class),
@Annotation(target = AnnotationTarget.CONSTRUCTOR_PARAMETER, type = javax.inject.Named.class, elements = "\"config\"") })
public interface EmployeeDao {
@Select
Employee selectById(Integer id);
}
@Dao
@AnnotateWith(annotations = {
@Annotation(target = AnnotationTarget.CONSTRUCTOR, type = javax.inject.Inject.class),
@Annotation(target = AnnotationTarget.CONSTRUCTOR_PARAMETER, type = javax.inject.Named.class, elements = "\"config\"") })
public interface DepartmentDao {
@Select
Department selectById(Integer id);
}
To avoid annotating your DAO interfaces with @AnnotateWith repeatedly, annotate an arbitrary annotation with it only once:
@AnnotateWith(annotations = {
@Annotation(target = AnnotationTarget.CONSTRUCTOR, type = javax.inject.Inject.class),
@Annotation(target = AnnotationTarget.CONSTRUCTOR_PARAMETER, type = javax.inject.Named.class, elements = "\"config\"") })
public @interface InjectConfig {
}
Then you can annotate your DAO interfaces with the above @InjectConfig annotation:
@Dao
@InjectConfig
public interface EmployeeDao {
@Select
Employee selectById(Integer id);
}
@Dao
@InjectConfig
public interface DepartmentDao {
@Select
Department selectById(Integer id);
}
Basic classes¶
List of basic classes¶
- primitive types except char
- wrapper classes for the above primitive types
- enum types
- byte[]
- java.lang.String
- java.lang.Object
- java.math.BigDecimal
- java.math.BigInteger
- java.time.LocalDate
- java.time.LocalTime
- java.time.LocalDateTime
- java.sql.Date
- java.sql.Time
- java.sql.Timestamp
- java.sql.Array
- java.sql.Blob
- java.sql.Clob
- java.sql.SQLXML
- java.util.Date
Differences between temporal classes¶
Class | Description |
---|---|
java.time.LocalDate | represents SQL DATE |
java.time.LocalTime | represents SQL TIME |
java.time.LocalDateTime | represents SQL TIMESTAMP and may hold nanoseconds if the RDBMS supports it |
java.sql.Date | represents SQL DATE |
java.sql.Time | represents SQL TIME |
java.sql.Timestamp | represents SQL TIMESTAMP and may hold nanoseconds if the RDBMS supports it |
java.util.Date | represents SQL TIMESTAMP and doesn't hold nanoseconds |
Examples¶
Using in entity class¶
@Entity
public class Employee {
@Id
Integer employeeId;
Optional<String> employeeName;
@Version
Long versionNo;
...
}
Using in domain class¶
@Domain(valueType = String.class)
public class PhoneNumber {
private final String value;
public PhoneNumber(String value) {
this.value = value;
}
public String getValue() {
return value;
}
}
Using in DAO interface¶
@Dao(config = AppConfig.class)
public interface EmployeeDao {
@Select
Employee selectById(Integer employeeId);
@Select
List<String> selectAllName();
}
Domain classes¶
A domain class represents a table column and allows you to handle the column value as a Java object. In the Doma framework, a domain means the set of all values which a data type may contain. In short, a domain class is a user-defined class that can be mapped to a column. The use of domain classes is optional.
Every domain class is either an internal domain class or an external domain class.
Internal domain classes¶
An internal domain class must be annotated with @Domain.
The valueType element of @Domain corresponds to the data type of a column.
Specify one of the Basic classes as the valueType element.
Instantiation with a constructor¶
The default value of the factoryMethod element of @Domain is new.
The value new means that objects of the annotated class are created with a constructor.
@Domain(valueType = String.class)
public class PhoneNumber {
private final String value;
public PhoneNumber(String value) {
this.value = value;
}
public String getValue() {
return value;
}
public String getAreaCode() {
...
}
}
Instantiation with a static factory method¶
To create objects of the annotated class with a static factory method, specify the method name in the factoryMethod element of @Domain.
The method must be static and non-private:
@Domain(valueType = String.class, factoryMethod = "of")
public class PhoneNumber {
private final String value;
private PhoneNumber(String value) {
this.value = value;
}
public String getValue() {
return value;
}
public String getAreaCode() {
...
}
public static PhoneNumber of(String value) {
return new PhoneNumber(value);
}
}
With a static factory method, you can apply the @Domain annotation to enum types:
@Domain(valueType = String.class, factoryMethod = "of")
public enum JobType {
SALESMAN("10"),
MANAGER("20"),
ANALYST("30"),
PRESIDENT("40"),
CLERK("50");
private final String value;
private JobType(String value) {
this.value = value;
}
public static JobType of(String value) {
for (JobType jobType : JobType.values()) {
if (jobType.value.equals(value)) {
return jobType;
}
}
throw new IllegalArgumentException(value);
}
public String getValue() {
return value;
}
}
Using type parameters in internal domain classes¶
Internal domain class declarations can have type parameters:
@Domain(valueType = int.class)
public class Identity<T> {
private final int value;
public Identity(int value) {
this.value = value;
}
public int getValue() {
return value;
}
}
When you create objects of the annotated class with a static factory method, the method declaration must have the same type parameters as the class declaration:
@Domain(valueType = int.class, factoryMethod = "of")
public class Identity<T> {
private final int value;
private Identity(int value) {
this.value = value;
}
public int getValue() {
return value;
}
public static <T> Identity<T> of(int value) {
return new Identity<T>(value);
}
}
External domain classes¶
This feature allows you to define arbitrary classes as domain classes, even if the classes cannot be annotated with the @Domain annotation.
To define external domain classes, do the following:
- Create a class that implements org.seasar.doma.jdbc.domain.DomainConverter and annotate it with @ExternalDomain
- Create a class that is annotated with @DomainConverters
- Specify the class annotated with @ExternalDomain in the value element of @DomainConverters
- Specify the fully qualified name of the class annotated with @DomainConverters as an option for Annotation processing
Suppose, for instance, there is a PhoneNumber class that you cannot change:
public class PhoneNumber {
private final String value;
public PhoneNumber(String value) {
this.value = value;
}
public String getValue() {
return value;
}
public String getAreaCode() {
...
}
}
First, to define the PhoneNumber class as an external domain class, create the following class:
@ExternalDomain
public class PhoneNumberConverter implements DomainConverter<PhoneNumber, String> {
public String fromDomainToValue(PhoneNumber domain) {
return domain.getValue();
}
public PhoneNumber fromValueToDomain(String value) {
if (value == null) {
return null;
}
return new PhoneNumber(value);
}
}
Then create the following class and specify the above class in the value element of @DomainConverters:
@DomainConverters({ PhoneNumberConverter.class })
public class DomainConvertersProvider {
}
Finally, specify the fully qualified name of the above class as an option for Annotation processing. If you use Gradle, specify the option in the build script as follows:
compileJava {
options {
compilerArgs = ['-Adoma.domain.converters=example.DomainConvertersProvider']
}
}
Using type parameters in external domain classes¶
External domain class declarations can have type parameters:
public class Identity<T> {
private final int value;
public Identity(int value) {
this.value = value;
}
public int getValue() {
return value;
}
}
In the DomainConverter implementation class, specify a wildcard ? as the type argument of the external domain class:
@ExternalDomain
public class IdentityConverter implements DomainConverter<Identity<?>, String> {
public String fromDomainToValue(Identity<?> domain) {
return String.valueOf(domain.getValue());
}
@SuppressWarnings("rawtypes")
public Identity<?> fromValueToDomain(String value) {
if (value == null) {
return null;
}
return new Identity(Integer.parseInt(value));
}
}
Example¶
The domain classes shown above are used as follows:
@Entity
public class Employee {
@Id
Identity<Employee> employeeId;
String employeeName;
PhoneNumber phoneNumber;
JobType jobType;
@Version
Integer versionNo;
...
}
@Dao(config = AppConfig.class)
public interface EmployeeDao {
@Select
Employee selectById(Identity<Employee> employeeId);
@Select
Employee selectByPhoneNumber(PhoneNumber phoneNumber);
@Select
List<PhoneNumber> selectAllPhoneNumber();
@Select
Employee selectByJobType(JobType jobType);
@Select
List<JobType> selectAllJobTypes();
}
Embeddable classes¶
Embeddable classes group the properties for Entity classes.
Embeddable definition¶
The following code snippet shows how to define an embeddable:
@Embeddable
public class Address {
final String city;
final String street;
@Column(name = "ZIP_CODE")
final String zip;
public Address(String city, String street, String zip) {
this.city = city;
this.street = street;
this.zip = zip;
}
}
The embeddable class is used as the entity field type:
@Entity
public class Employee {
@Id
Integer id;
Address address;
}
The above entity definition is equivalent to the following one:
@Entity
public class Employee {
@Id
Integer id;
String city;
String street;
@Column(name = "ZIP_CODE")
String zip;
}
Naming convention¶
A naming convention is inherited from the enclosing Entity classes.
Field definition¶
By default, the fields are persistent and correspond to the database columns or result set columns.
The field type must be one of the following:
- Basic classes
- Domain classes
- java.util.Optional, whose element is either Basic classes or Domain classes
- java.util.OptionalInt
- java.util.OptionalLong
- java.util.OptionalDouble
@Embeddable
public class Address {
...
String street;
}
Method definition¶
There are no limitations in the use of methods.
Entity classes¶
Entity classes correspond to database tables or query result sets.
Entity definition¶
The following code snippet shows how to define an entity:
@Entity
public class Employee {
...
}
An entity class can inherit from another entity class.
The following code snippet shows how to inherit from another entity class:
@Entity
public class SkilledEmployee extends Employee {
...
}
Entity listeners¶
Entity listeners work before/after Doma issues the database modification statements - INSERT, DELETE and UPDATE.
The following code snippet shows how to define an entity listener:
public class EmployeeEntityListener implements EntityListener<Employee> {
...
}
To use the entity listener, specify it in the listener property of the @Entity annotation:
@Entity(listener = EmployeeEntityListener.class)
public class Employee {
...
}
An entity subclass inherits its parent's entity listener.
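The listener body elided above typically overrides some of the callback default methods declared by EntityListener. A rough sketch (the callback and context type names are assumed to match the EntityListener interface of your Doma version):

public class EmployeeEntityListener implements EntityListener<Employee> {

    @Override
    public void preInsert(Employee entity, PreInsertContext<Employee> context) {
        // runs just before Doma issues the INSERT statement for this entity
    }

    @Override
    public void preUpdate(Employee entity, PreUpdateContext<Employee> context) {
        // runs just before Doma issues the UPDATE statement for this entity
    }
}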
Naming convention¶
The naming convention maps the names between:
- the database tables and the Java entity classes
- the database columns and the Java entity fields
The following code snippet shows how to apply the naming convention to an entity:
@Entity(naming = NamingType.SNAKE_UPPER_CASE)
public class EmployeeInfo {
...
}
When the name property of the @Table or @Column annotation is explicitly specified, the naming convention is ignored.
An entity subclass inherits its parent's naming convention.
Immutable¶
An entity class can be immutable.
The following code snippet shows how to define an immutable entity:
@Entity(immutable = true)
public class Employee {
@Id
final Integer id;
final String name;
@Version
final Integer version;
public Employee(Integer id, String name, Integer version) {
this.id = id;
this.name = name;
this.version = version;
}
...
}
The immutable property of the @Entity annotation must be true.
The persistent fields must be final.
An entity subclass inherits its parent's immutable property.
Table¶
You can specify the corresponding table name with the @Table annotation:
@Entity
@Table(name = "EMP")
public class Employee {
...
}
Without the @Table annotation, the table name is resolved by the Naming Convention.
Field definition¶
By default, the fields are persistent and correspond to the database columns or result set columns.
The field type must be one of the following:
- Basic classes
- Domain classes
- Embeddable classes
- java.util.Optional, whose element is either Basic classes or Domain classes
- java.util.OptionalInt
- java.util.OptionalLong
- java.util.OptionalDouble
The following code snippet shows how to define a field:
@Entity
public class Employee {
...
Integer employeeId;
}
Column¶
You can specify the corresponding column name with the @Column annotation:
@Column(name = "ENAME")
String employeeName;
To exclude fields from INSERT or UPDATE statements, specify false for the insertable or updatable property of the @Column annotation:
@Column(insertable = false, updatable = false)
String employeeName;
Without the @Column annotation, the column name is resolved by the Naming Convention.
Note
When the field type is an Embeddable class, you cannot specify the @Column annotation on the field.
Id¶
The database primary keys are represented with the @Id annotation:
@Id
Integer id;
When there is a composite primary key, use the @Id annotation multiple times:
@Id
Integer id;
@Id
Integer id2;
Note
When the field type is an Embeddable class, you cannot specify the @Id annotation on the field.
Id generation¶
You can instruct Doma to generate id values automatically using the @GeneratedValue annotation.
The field type must be one of the following:
- the subclass of java.lang.Number
- Domain classes, whose value type is the subclass of java.lang.Number
- java.util.Optional, whose element is either above types
- OptionalInt
- OptionalLong
- OptionalDouble
- the primitive types for number
Note
The generated values are assigned to the field only when the field is either null or less than 0.
If you use one of the primitive types as the field type, initialize the field with a value less than 0, such as -1.
Id generation by IDENTITY¶
To generate values using the RDBMS IDENTITY function, specify the GenerationType.IDENTITY enum value for the strategy property of @GeneratedValue:
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
Integer id;
In advance, define the database primary key as IDENTITY.
Warning
Not all RDBMSs support the IDENTITY function.
Id generation by SEQUENCE¶
To generate values using an RDBMS SEQUENCE, specify the GenerationType.SEQUENCE enum value for the strategy property of the @GeneratedValue annotation, and use the @SequenceGenerator annotation:
@Id
@GeneratedValue(strategy = GenerationType.SEQUENCE)
@SequenceGenerator(sequence = "EMPLOYEE_SEQ")
Integer id;
In advance, define the SEQUENCE in the database.
The SEQUENCE definition, such as the name, the allocation size and the initial size, must correspond to the properties of the @SequenceGenerator annotation.
Warning
Not all RDBMSs support SEQUENCEs.
Id generation by TABLE¶
To generate values using a database table, specify the GenerationType.TABLE enum value for the strategy property of the @GeneratedValue annotation, and use the @TableGenerator annotation:
@Id
@GeneratedValue(strategy = GenerationType.TABLE)
@TableGenerator(pkColumnValue = "EMPLOYEE_ID")
Integer id;
In advance, define the table in the database.
The table's definition must correspond to the properties of the @TableGenerator annotation.
For example, the DDL should be following:
CREATE TABLE ID_GENERATOR(PK VARCHAR(20) NOT NULL PRIMARY KEY, VALUE INTEGER NOT NULL);
You can change the table name and the column names using the properties of the @TableGenerator annotation.
Version¶
The version fields for optimistic locking are represented with the @Version annotation.
The field type must be one of the following:
- the subclass of java.lang.Number
- Domain classes, whose value type is the subclass of java.lang.Number
- java.util.Optional, whose element is either above types
- OptionalInt
- OptionalLong
- OptionalDouble
- the primitive types for number
@Version
Integer version;
Note
When the field type is an Embeddable class, you cannot specify the @Version annotation on the field.
Tenant Id¶
The tenant id fields are represented with the @TenantId annotation.
The column corresponding to the annotated field is included in the WHERE clause of UPDATE and DELETE statements.
@TenantId
String tenantId;
Note
When the field type is an Embeddable class, you cannot specify the @TenantId annotation on the field.
Transient¶
If an entity has fields that you don't want to persist, you can annotate them with @Transient:
@Transient
List<String> nameList;
OriginalStates¶
If you want to include only changed values in UPDATE statements, you can define a field annotated with @OriginalStates.
The field holds the original values that were fetched from the database.
Doma uses these values to determine which values have changed in the application and includes only the changed values in UPDATE statements.
The following code snippet shows how to define original states:
@OriginalStates
Employee originalStates;
The field type must be the same as the entity type.
Method definition¶
There are no limitations in the use of methods.
Dao interfaces¶
A Data Access Object (DAO) is an interface for accessing the database.
Dao definition¶
A DAO is defined as an interface annotated with @Dao.
A class implementing the DAO interface is generated at compile time by annotation processing.
Query definition¶
Queries are defined using annotations.
Use Query builders in a default method if you want to build a query freely in Java code.
Default method¶
You can write arbitrary Java code in a default method.
You can get the Config instance associated with the DAO instance by calling Config.get with the DAO instance as the argument.
@Dao(config = AppConfig.class)
public interface EmployeeDao {
default int count() {
Config config = Config.get(this);
SelectBuilder builder = SelectBuilder.newInstance(config);
builder.sql("select count(*) from employee");
return builder.getScalarSingleResult(int.class);
}
}
Example¶
The implementation class is generated by the annotation processor at compile time, and you instantiate and use it. However, if the configuration class is managed by a DI container, the instantiation of the implementation class should also be controlled by the DI container.
EmployeeDao employeeDao = new EmployeeDaoImpl();
Employee employee = employeeDao.selectById(1);
By default, the implementation class name is the interface name suffixed with Impl.
Refer to Annotation processing to change the package and the suffix.
If you use the default constructor, the DataSource is determined by the configuration specified in the config element of @Dao.
But you can also instantiate the class with an explicitly specified DataSource.
DataSource dataSource = ...;
EmployeeDao employeeDao = new EmployeeDaoImpl(dataSource);
Employee employee = employeeDao.selectById(1);
You can also instantiate it with an explicitly specified Connection.
Connection connection = ...;
EmployeeDao employeeDao = new EmployeeDaoImpl(connection);
Employee employee = employeeDao.selectById(1);
A DAO interface does not need to be defined one-to-one with an entity class. A single DAO interface can handle more than one entity class.
@Dao(config = AppConfig.class)
public interface MyDao {
@Select
Employee selectEmployeeById(int id);
@Select
Department selectDepartmentByName(String name);
@Update
int updateAddress(Address address);
}
Queries¶
Search¶
To execute a search, annotate a DAO method with @Select.
@Dao(config = AppConfig.class)
public interface EmployeeDao {
@Select
List<Employee> selectByDepartmentName(String departmentName);
...
}
An SQL file is required for searches. There is no feature that automatically generates the search SQL.
Note
You need to create an entity class that matches the search result. For example, a result set containing columns of the EMPLOYEE table can be mapped to the Employee class if an Employee class corresponding to the EMPLOYEE table is declared. But you need a class different from the Employee entity class (for example, an EmployeeDepartment class) for a result set obtained by joining the EMPLOYEE and DEPARTMENT tables.
Query condition¶
Method parameters are used as the query conditions. The available types are listed below.
- Basic classes
- Domain classes
- Arbitrary types
- java.util.Optional whose element is a Basic class, a Domain class or an arbitrary type
- java.util.Iterable whose element is a Basic class or a Domain class
- java.util.OptionalInt
- java.util.OptionalLong
- java.util.OptionalDouble
There is no limit on the number of parameters.
You can pass null as a parameter if the parameter type is a Basic class or a Domain class.
Parameters of any other type must not be null.
Queries using basic or domain classes¶
Declare Basic classes or Domain classes as the method parameters.
@Select
List<Employee> selectByNameAndSalary(String name, Salary salary);
Map the method parameters to the SQL by using SQL comments in the SQL file. The method parameter names are referenced inside the SQL comments.
select * from employee where employee_name = /* name */'hoge' and salary > /* salary */100
Queries using arbitrary types¶
When an arbitrary type is used as a method parameter, map it to the SQL by accessing its fields or calling its methods with a dot (.).
@Select
List<Employee> selectByExample(Employee employee);
select * from employee where employee_name = /* employee.name */'hoge' and salary > /* employee.getSalary() */100
You can specify multiple parameters.
@Select
List<Employee> selectByEmployeeAndDepartment(Employee employee, Department department);
Mapping to an IN clause using Iterable¶
Use a subtype of java.lang.Iterable to execute a search with an IN clause.
@Select
List<Employee> selectByNames(List<String> names);
select * from employee where employee_name in /* names */('aaa','bbb','ccc')
Single record search¶
To search for a single record, specify one of the following as the method return type.
- Basic classes
- Domain classes
- Entity classes
- java.util.Map<String, Object>
- java.util.Optional whose element is a Basic class, a Domain class, an Entity class or java.util.Map<String, Object>
- java.util.OptionalInt
- java.util.OptionalLong
- java.util.OptionalDouble
@Select
Employee selectByNameAndSalary(String name, BigDecimal salary);
null is returned if the return type is not Optional and the result count is 0.
If Ensure of search result is enabled, an exception is thrown regardless of the return type when the result count is 0.
NonUniqueResultException is thrown if two or more results exist.
Multiple record search¶
To search for multiple records, specify java.util.List as the method return type.
The following types can be used as the List element:
- Basic classes
- Domain classes
- Entity classes
- java.util.Map<String, Object>
- java.util.Optional whose element is a Basic class or a Domain class
- java.util.OptionalInt
- java.util.OptionalLong
- java.util.OptionalDouble
@Select
List<Employee> selectByNameAndSalary(String name, Salary salary);
An empty list, instead of null, is returned if the result count is 0.
But if Ensure of search result is enabled, an exception is thrown when the result count is 0.
Stream search¶
Use stream search if you want to process all records as a java.util.stream.Stream rather than receive them as a java.util.List.
There are two kinds of stream search: returning the Stream as the return value, and passing the Stream to a java.util.function.Function.
Pass to the Function¶
Set SelectType.STREAM to the strategy property of the @Select annotation and define a method parameter whose type is java.util.function.Function<Stream<TARGET>, RESULT> or a subtype of it.
@Select(strategy = SelectType.STREAM)
BigDecimal selectByNameAndSalary(String name, BigDecimal salary, Function<Stream<Employee>, BigDecimal> mapper);
The caller receives the Stream and passes a lambda expression that returns the result.
EmployeeDao dao = new EmployeeDaoImpl();
BigDecimal result = dao.selectByNameAndSalary(name, salary, stream -> {
return ...;
});
The type parameter TARGET of Function<Stream<TARGET>, RESULT> must be one of the following:
- Basic classes
- Domain classes
- Entity classes
- java.util.Map<String, Object>
- java.util.Optional whose element is a Basic class or a Domain class
- java.util.OptionalInt
- java.util.OptionalLong
- java.util.OptionalDouble
The type parameter RESULT must match the return type of the DAO method.
If Ensure of search result is enabled, an exception is thrown when the result count is 0.
Return the return value¶
Define java.util.stream.Stream as the method return type.
The following types can be used as the Stream element:
- Basic classes
- Domain classes
- Entity classes
- java.util.Map<String, Object>
- java.util.Optional whose element is a Basic class or a Domain class
- java.util.OptionalInt
- java.util.OptionalLong
- java.util.OptionalDouble
@Select
Stream<Employee> selectByNameAndSalary(String name, BigDecimal salary);
The caller looks like this:
EmployeeDao dao = new EmployeeDaoImpl();
try (Stream<Employee> stream = dao.selectByNameAndSalary(name, salary)) {
...
}
If Ensure of search result is enabled, an exception is thrown when the result count is 0.
Warning
Make sure to close the Stream so that the resources are released.
If you do not close the Stream, the underlying java.sql.ResultSet, java.sql.PreparedStatement and java.sql.Connection are not closed.
Note
Consider the 'pass to the Function' style unless there is a particular reason not to, because returning the Stream carries the risk of forgetting to release the resources.
Doma shows a warning message on the DAO method to draw attention to this.
Specify @Suppress as below to suppress the warning.
@Select
@Suppress(messages = { Message.DOMA4274 })
Stream<Employee> selectByNameAndSalary(String name, BigDecimal salary);
Collect search¶
Use collect search if you want to process the result with a java.util.stream.Collector.
Set SelectType.COLLECT to the strategy property of the @Select annotation and define a method parameter whose type is java.util.stream.Collector<TARGET, ACCUMULATION, RESULT> or java.util.stream.Collector<TARGET, ?, RESULT>.
@Select(strategy = SelectType.COLLECT)
<RESULT> RESULT selectBySalary(BigDecimal salary, Collector<Employee, ?, RESULT> collector);
The caller passes a Collector instance.
EmployeeDao dao = new EmployeeDaoImpl();
Map<Integer, List<Employee>> result =
dao.selectBySalary(salary, Collectors.groupingBy(Employee::getDepartmentId));
The type parameter TARGET of Collector<TARGET, ACCUMULATION, RESULT> must be one of the following:
- Basic classes
- Domain classes
- Entity classes
- java.util.Map<String, Object>
- java.util.Optional whose element is a Basic class or a Domain class
- java.util.OptionalInt
- java.util.OptionalLong
- java.util.OptionalDouble
The type parameter RESULT must match the return type of the DAO method.
If Ensure of search result is enabled, an exception is thrown when the result count is 0.
Note
Collect search is a shortcut for the 'pass to the Function' style of stream search.
You can achieve the same result by calling the collect method on the Stream obtained from a stream search.
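For illustration, the collect search above could be rewritten in the Function style like the following sketch (it assumes a hypothetical selectBySalary variant declared with SelectType.STREAM and a Function parameter):

// Hypothetical stream-search counterpart of the DAO method shown above:
// @Select(strategy = SelectType.STREAM)
// <RESULT> RESULT selectBySalary(BigDecimal salary, Function<Stream<Employee>, RESULT> mapper);

EmployeeDao dao = new EmployeeDaoImpl();
Map<Integer, List<Employee>> result = dao.selectBySalary(salary,
        stream -> stream.collect(Collectors.groupingBy(Employee::getDepartmentId)));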
Search using search options¶
By using SelectOptions, which represents search options, you can automatically generate SQL for paging and pessimistic concurrency control from an SQL file that contains a SELECT statement.
SelectOptions is used in combination with Single record search, Multiple record search and Stream search.
Define SelectOptions as a DAO method parameter.
@Dao(config = AppConfig.class)
public interface EmployeeDao {
@Select
List<Employee> selectByDepartmentName(String departmentName, SelectOptions options);
...
}
You can get a SelectOptions instance from the static get method.
SelectOptions options = SelectOptions.get();
Paging¶
Specify the start position with the offset method and the number of records to fetch with the limit method of SelectOptions, and pass the SelectOptions instance to the DAO method.
SelectOptions options = SelectOptions.get().offset(5).limit(10);
EmployeeDao dao = new EmployeeDaoImpl();
List<Employee> list = dao.selectByDepartmentName("ACCOUNT", options);
Paging is realized by rewriting the original SQL written in the file before executing it. The original SQL must satisfy the conditions below.
- It is a SELECT statement
- It does not execute set operations such as UNION, EXCEPT or INTERSECT at the top level (they may be used in subqueries)
- It does not already contain paging
In addition, a dialect-specific condition must be satisfied:
Dialect | Condition |
---|---|
Db2Dialect | If offset is specified, there is an ORDER BY clause and all columns specified in the ORDER BY clause are included in the SELECT clause. |
Mssql2008Dialect | If offset is specified, there is an ORDER BY clause and all columns specified in the ORDER BY clause are included in the SELECT clause. |
MssqlDialect | If offset is specified, there is an ORDER BY clause. |
StandardDialect | There is an ORDER BY clause and all columns specified in the ORDER BY clause are included in the SELECT clause. |
Pessimistic concurrency control¶
Request pessimistic concurrency control with the forUpdate method of SelectOptions, and pass the SelectOptions instance to the DAO method.
SelectOptions options = SelectOptions.get().forUpdate();
EmployeeDao dao = new EmployeeDaoImpl();
List<Employee> list = dao.selectByDepartmentName("ACCOUNT", options);
Several methods whose names start with forUpdate are provided for pessimistic concurrency control, such as forUpdateNowait, which does not wait to acquire a lock, and an overload of forUpdate that accepts aliases of the tables or columns to lock.
Pessimistic concurrency control is realized by rewriting the original SQL written in the file. The original SQL must satisfy the conditions below.
- It is a SELECT statement
- It does not execute set operations such as UNION, EXCEPT or INTERSECT at the top level (they may be used in subqueries)
- It does not already contain pessimistic concurrency control
Some or all of the pessimistic concurrency control methods cannot be used, depending on the database dialect.
Dialect | Description |
---|---|
Db2Dialect | You can use forUpdate(). |
H2Dialect | You can use forUpdate(). |
HsqldbDialect | You can use forUpdate(). |
Mssql2008Dialect | You can use forUpdate() and forUpdateNowait(). However, the FROM clause in the original SQL must consist of a single table. |
MysqlDialect | You can use forUpdate(). |
OracleDialect | You can use forUpdate(), forUpdate(String… aliases), forUpdateNowait(), forUpdateNowait(String… aliases), forUpdateWait(int waitSeconds) and forUpdateWait(int waitSeconds, String… aliases). |
PostgresDialect | You can use forUpdate() and forUpdate(String… aliases). |
StandardDialect | You cannot use any of the pessimistic concurrency control methods. |
Aggregate¶
You can get the aggregate count by calling the count method of SelectOptions.
It is usually used in combination with the paging options to get the total count of records that would match if paging were not applied.
SelectOptions options = SelectOptions.get().offset(5).limit(10).count();
EmployeeDao dao = new EmployeeDaoImpl();
List<Employee> list = dao.selectByDepartmentName("ACCOUNT", options);
long count = options.getCount();
The aggregate count is obtained with the getCount method of SelectOptions after the Dao method has been called.
The getCount method returns -1 if the count method was not called before calling the Dao method.
Ensure of search result¶
Set the ensureResult property of the @Select annotation to true if you want to ensure that the search result count is at least 1.
@Select(ensureResult = true)
Employee selectById(Integer id);
NoResultException is thrown if the search result count is 0.
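A caller can handle the exception as usual (a minimal sketch; the id value is illustrative):
EmployeeDao dao = new EmployeeDaoImpl();
try {
    Employee employee = dao.selectById(99);
} catch (NoResultException e) {
    // no employee with id 99 exists
}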
Ensure of mapping search result¶
Set the ensureResultMapping property of the @Select annotation to true if you want to ensure that all entity properties are mapped to result set columns without omission.
@Select(ensureResultMapping = true)
Employee selectById(Integer id);
ResultMappingException is thrown if there is a property that is not mapped to a result set column.
Query timeout¶
You can specify the query timeout in seconds with the queryTimeout property of the @Select annotation.
@Select(queryTimeout = 10)
List<Employee> selectAll();
The query timeout specified in the Configuration is used if the queryTimeout property is not set.
Fetch size¶
You can specify the fetch size with the fetchSize property of the @Select annotation.
@Select(fetchSize = 20)
List<Employee> selectAll();
The fetch size specified in the Configuration is used if the value is not set.
Max row count¶
You can specify the maximum row count with the maxRows property of the @Select annotation.
@Select(maxRows = 100)
List<Employee> selectAll();
The maximum row count specified in the Configuration is used if the value is not set.
Naming rule of map’s key¶
You can specify the naming rule for the map keys with the mapKeyNaming property of the @Select annotation when you map search results to java.util.Map<String, Object>.
@Select(mapKeyNaming = MapKeyNamingType.CAMEL_CASE)
List<Map<String, Object>> selectAll();
MapKeyNamingType.CAMEL_CASE means that column names are converted to camel case.
There are also rules that convert column names to upper case or lower case.
The final conversion result is decided by the value specified here and the implementation of MapKeyNaming specified in the Configuration.
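For example, with MapKeyNamingType.CAMEL_CASE a column named EMPLOYEE_NAME is exposed under the key "employeeName" (a minimal sketch; the column name is illustrative):
EmployeeDao dao = new EmployeeDaoImpl();
for (Map<String, Object> row : dao.selectAll()) {
    Object name = row.get("employeeName"); // value of the EMPLOYEE_NAME column
}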
SQL log output format¶
You can specify the SQL log output format with the sqlLog property of the @Select annotation.
@Select(sqlLog = SqlLogType.RAW)
List<Employee> selectById(Integer id);
SqlLogType.RAW means that the SQL with bind parameters (?) is output to the log.
Insert¶
Contents
Annotate a Dao method with @Insert to execute an insert.
@Dao(config = AppConfig.class)
public interface EmployeeDao {
@Insert
int insert(Employee employee);
@Insert
Result<ImmutableEmployee> insert(ImmutableEmployee employee);
}
By default the INSERT statement is auto generated.
You can map the method to an arbitrary SQL file by setting the sqlFile element of @Insert to true.
The preInsert method of the entity listener is called before the insert is executed if an entity listener is specified for the Entity classes parameter.
Likewise, the postInsert method of the entity listener is called after the insert is executed.
Return value¶
The return value must be org.seasar.doma.jdbc.Result whose element is the entity class if the parameter is an immutable entity class.
The return value must be int, representing the updated count, if the above condition is not satisfied.
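For example, the inserted state of an immutable entity and the updated count can be read from the Result object (a minimal sketch; the ImmutableEmployee constructor arguments are hypothetical):
EmployeeDao dao = new EmployeeDaoImpl();
ImmutableEmployee employee = new ImmutableEmployee(null, "SMITH", new BigDecimal(1000), null);
Result<ImmutableEmployee> result = dao.insert(employee);
ImmutableEmployee newEmployee = result.getEntity(); // entity reflecting generated values
int count = result.getCount();                      // updated row count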
Insert by auto generated SQL¶
The parameter type must be an entity class. Only one parameter can be specified. The parameter must not be null.
@Insert
int insert(Employee employee);
@Insert
Result<ImmutableEmployee> insert(ImmutableEmployee employee);
Identifier¶
The identifier is auto generated and set to the entity if the identifier of the entity class is annotated with @GeneratedValue.
See Id generation for cautionary points.
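A minimal sketch of such an entity, assuming the IDENTITY generation strategy (the strategy and the fields are illustrative):
@Entity
public class Employee {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    Integer id;
    String name;
    BigDecimal salary;
    @Version
    Integer version;
}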
Version numbers¶
If the entity class has a property annotated with @Version and a value of 0 or greater is explicitly set, that value is used.
If the value is not set, or is less than 0, the value 1 is set automatically.
Control insertion target property¶
insertable¶
If the entity class has properties annotated with @Column, a property whose insertable element of @Column is set to false is excluded from the insertion.
exclude¶
Properties specified in the exclude element of @Insert are excluded from the insertion.
Even if the insertable element of @Column is true, a property is excluded from the insertion if it is specified in this element.
@Insert(exclude = {"name", "salary"})
int insert(Employee employee);
include¶
Only properties specified in the include element of @Insert are included in the insertion.
If the same property is specified in both the include element and the exclude element of @Insert, the property is excluded from the insertion.
Even if a property is specified in this element, it is excluded from the insertion if the insertable element of @Column is false.
@Insert(include = {"name", "salary"})
int insert(Employee employee);
excludeNull¶
Properties whose values are null are excluded from the insertion if the excludeNull element of @Insert is true.
If this element is true, a property whose value is null is excluded from the insertion even if the insertable element of @Column is true or the property is specified in the include element of @Insert.
@Insert(excludeNull = true)
int insert(Employee employee);
Insert by SQL file¶
To execute an insertion with an SQL file, set the sqlFile element of @Insert to true and prepare an SQL file that corresponds to the method.
You can use arbitrary types as parameters, and there is no limit on the number of parameters.
A parameter may be null if its type is a basic type or a domain class.
A parameter of any other type must not be null.
@Insert(sqlFile = true)
int insert(Employee employee);
@Insert(sqlFile = true)
Result<ImmutableEmployee> insert(ImmutableEmployee employee);
For example, you would describe an SQL file like the one below to correspond to the above methods.
insert into employee (id, name, salary, version)
values (/* employee.id */0,
/* employee.name */'hoge',
/* employee.salary */100,
/* employee.version */0)
Automatic identifier setting and automatic version value setting are not performed when inserting with an SQL file.
Also, the exclude, include, and excludeNull elements of @Insert are not referenced.
Unique constraint violation¶
UniqueConstraintException is thrown if a unique constraint violation occurs, regardless of whether an SQL file is used.
Query timeout¶
You can specify the query timeout in seconds with the queryTimeout element of @Insert.
@Insert(queryTimeout = 10)
int insert(Employee employee);
This setting is applied regardless of whether an SQL file is used.
The query timeout specified in the Configuration is used if the queryTimeout element is not set.
SQL log output format¶
You can specify the SQL log output format with the sqlLog element of @Insert.
@Insert(sqlLog = SqlLogType.RAW)
int insert(Employee employee);
SqlLogType.RAW means that the SQL with bind parameters (?) is output to the log.
Update¶
Contents
Annotate a Dao method with @Update to execute an update.
@Dao(config = AppConfig.class)
public interface EmployeeDao {
@Update
int update(Employee employee);
@Update
Result<ImmutableEmployee> update(ImmutableEmployee employee);
}
By default the UPDATE statement is auto generated.
You can map the method to an arbitrary SQL file by setting the sqlFile property of the @Update annotation to true.
The preUpdate method of the entity listener is called before the update is executed if an entity listener is specified for the entity class parameter.
Likewise, the postUpdate method of the entity listener is called after the update is executed.
Return value¶
The return value must be org.seasar.doma.jdbc.Result whose element is the entity class if the parameter is an immutable entity class.
The return value must be int, representing the updated count, if the above condition is not satisfied.
Update by auto generated SQL¶
The parameter type must be an entity class. Only one parameter can be specified. The parameter must not be null.
@Update
int update(Employee employee);
@Update
Result<ImmutableEmployee> update(ImmutableEmployee employee);
Version number and optimistic concurrency control in auto generated SQL¶
Optimistic concurrency control is executed if the following conditions are satisfied:
- The entity class of the parameter has a property annotated with @Version
- The ignoreVersion element of the @Update annotation is false
If optimistic concurrency control is enabled, the version number is included with the identifier in the update condition and is incremented by 1.
OptimisticLockException, representing an optimistic concurrency control failure, is thrown if the updated count is 0.
If the updated count is not 0, OptimisticLockException is not thrown and the version property of the entity is incremented by 1.
ignoreVersion¶
If the ignoreVersion property of the @Update annotation is true,
the version number is not included in the update condition but is included in the SET clause of the UPDATE statement.
The version number is updated with the value set by the application.
OptimisticLockException is not thrown in this case, even if the updated count is 0.
@Update(ignoreVersion = true)
int update(Employee employee);
suppressOptimisticLockException¶
If the suppressOptimisticLockException property of @Update is true
and a property annotated with @Version exists, the version number is included in the update condition and incremented by 1,
but OptimisticLockException is not thrown even if the updated count is 0.
The version property value of the entity is still incremented by 1.
@Update(suppressOptimisticLockException = true)
int update(Employee employee);
Control updating target property¶
updatable¶
If the entity class has properties annotated with @Column, a property whose updatable property of @Column is false is excluded from the updating target.
exclude¶
Properties specified in the exclude property of the @Update annotation are excluded from the updating target.
Even if the updatable property of the @Column annotation is true, a property is excluded from the updating target if it is specified in this element.
@Update(exclude = {"name", "salary"})
int update(Employee employee);
include¶
Only properties specified in the include property of the @Update annotation are included in the updating target.
If the same property is specified in both the include property and the exclude property of @Update, the property is excluded from the updating target.
Even if a property is specified in this element, it is excluded from the updating target if the updatable property of the @Column annotation is false.
@Update(include = {"name", "salary"})
int update(Employee employee);
excludeNull¶
Properties whose values are null are excluded from the updating target if the excludeNull property of the @Update annotation is true.
If this property is true, a property whose value is null is excluded from the updating target even if the updatable property of the @Column annotation is true or the property is specified in the include property of the @Update annotation.
@Update(excludeNull = true)
int update(Employee employee);
includeUnchanged¶
This property is effective only if the entity class being updated has a property annotated with @OriginalStates.
If this property is true, all properties of the entity are included in the updating target. That is, the columns corresponding to all properties are included in the SET clause of the UPDATE statement.
If this property is false, only the properties that have actually been changed since the entity was fetched are included in the updating target.
That is, only the columns corresponding to the modified properties are included in the SET clause of the UPDATE statement.
@Update(includeUnchanged = true)
int update(Employee employee);
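The includeUnchanged property assumes an entity that keeps its original states. A minimal sketch of such an entity (the field name is illustrative):
@Entity
public class Employee {
    @Id
    Integer id;
    String name;
    BigDecimal salary;
    @Version
    Integer version;
    // snapshot taken when the entity is fetched; used to detect changed properties
    @OriginalStates
    Employee originalStates;
}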
Update by SQL file¶
To execute an update with an SQL file,
set the sqlFile property of the @Update annotation to true and prepare an SQL file that corresponds to the method.
Note
When updating with an SQL file, the rules differ depending on whether the Population directive (/*%populate*/) is used.
Case of using the comment that generates the update column list¶
The first parameter type must be an entity class.
There is no limit on the number of parameters.
A parameter may be null if its type is a basic type or a domain class.
A parameter of any other type must not be null.
@Update(sqlFile = true)
int update(Employee employee, BigDecimal salary);
@Update(sqlFile = true)
Result<ImmutableEmployee> update(ImmutableEmployee employee, BigDecimal salary);
For example, you would describe an SQL file like the one below to correspond to the above methods.
update employee set /*%populate*/ id = id where salary > /* salary */0
The rules for controlling the updating target properties are the same as for Update by auto generated SQL.
Case of not using the comment that generates the update column list¶
You can use arbitrary types as parameters.
There is no limit on the number of parameters.
A parameter may be null if its type is a basic type or a domain class.
A parameter of any other type must not be null.
@Update(sqlFile = true)
int update(Employee employee);
@Update(sqlFile = true)
Result<ImmutableEmployee> update(ImmutableEmployee employee);
For example, you would describe an SQL file like the one below to correspond to the above methods.
update employee set name = /* employee.name */'hoge', salary = /* employee.salary */100
where id = /* employee.id */0
The exclude, include, excludeNull, and includeUnchanged properties of the @Update annotation are not referenced when updating with an SQL file.
Version number and optimistic concurrency control in SQL file¶
Optimistic concurrency control is executed if the following conditions are satisfied:
- An entity class is included in the parameters.
- The leftmost entity class parameter has a property annotated with @Version
- The ignoreVersion element of the @Update annotation is false
However, writing the SQL for optimistic concurrency control in the SQL file is the application developer’s responsibility. For example, as in the SQL below, you must specify the version number in the WHERE clause and increment the version number by 1 in the SET clause.
update EMPLOYEE set DELETE_FLAG = 1, VERSION = /* employee.version */1 + 1
where ID = /* employee.id */1 and VERSION = /* employee.version */1
OptimisticLockException, representing an optimistic concurrency control failure, is thrown if the updated count of this SQL is 0.
If the updated count is not 0, OptimisticLockException is not thrown and the version property of the entity is incremented by 1.
If the ignoreVersion property of the @Update annotation is true,
OptimisticLockException is not thrown even if the updated count is 0.
The version property value of the entity is not modified either.
@Update(sqlFile = true, ignoreVersion = true)
int update(Employee employee);
If the suppressOptimisticLockException property of the @Update annotation is true,
OptimisticLockException is not thrown even if the updated count is 0.
However, the version property value of the entity is incremented by 1.
@Update(sqlFile = true, suppressOptimisticLockException = true)
int update(Employee employee);
Unique constraint violation¶
UniqueConstraintException is thrown if a unique constraint violation occurs, regardless of whether an SQL file is used.
Query timeout¶
You can specify the query timeout in seconds with the queryTimeout property of the @Update annotation.
@Update(queryTimeout = 10)
int update(Employee employee);
This setting is applied regardless of whether an SQL file is used.
The query timeout specified in the Configuration is used if the queryTimeout property is not set.
SQL log output format¶
You can specify the SQL log output format with the sqlLog property of the @Update annotation.
@Update(sqlLog = SqlLogType.RAW)
int update(Employee employee);
SqlLogType.RAW means that the SQL with bind parameters (?) is output to the log.
Delete¶
Contents
Annotate a Dao method with @Delete to execute a delete.
@Dao(config = AppConfig.class)
public interface EmployeeDao {
@Delete
int delete(Employee employee);
}
By default the DELETE statement is auto generated.
You can map the method to an arbitrary SQL file by setting the sqlFile property of the @Delete annotation to true.
The preDelete method of the entity listener is called before the delete is executed if an entity listener is specified for the entity class parameter.
Likewise, the postDelete method of the entity listener is called after the delete is executed.
Return value¶
The return value must be org.seasar.doma.jdbc.Result whose element is the entity class if the parameter is an immutable entity class.
The return value must be int, representing the updated count, if the above condition is not satisfied.
Delete by auto generated SQL¶
The parameter type must be an entity class.
Only one parameter can be specified.
The parameter must not be null.
@Delete
int delete(Employee employee);
@Delete
Result<ImmutableEmployee> delete(ImmutableEmployee employee);
Version number and optimistic concurrency control in auto generated SQL¶
Optimistic concurrency control is executed if the following conditions are satisfied:
- The entity class of the parameter has a property annotated with @Version
- The ignoreVersion element of the @Delete annotation is false
If optimistic concurrency control is enabled, the version number is included with the identifier in the delete condition.
OptimisticLockException, representing an optimistic concurrency control failure, is thrown if the deleted count is 0.
ignoreVersion¶
If the ignoreVersion property of the @Delete annotation is true, the version number is not included in the delete condition.
OptimisticLockException is not thrown in this case, even if the deleted count is 0.
@Delete(ignoreVersion = true)
int delete(Employee employee);
suppressOptimisticLockException¶
If the suppressOptimisticLockException property of @Delete is true, the version number is included in the delete condition,
but OptimisticLockException is not thrown even if the deleted count is 0.
@Delete(suppressOptimisticLockException = true)
int delete(Employee employee);
Delete by SQL file¶
To execute a delete with an SQL file, set the sqlFile property of the @Delete annotation to true and prepare an SQL file that corresponds to the method.
You can use arbitrary types as parameters, and there is no limit on the number of parameters.
A parameter may be null if its type is a basic type or a domain class.
A parameter of any other type must not be null.
Entity listener methods are not called even if an entity listener is specified for the entity.
@Delete(sqlFile = true)
int delete(Employee employee);
For example, you would describe an SQL file like the one below to correspond to the above method.
delete from employee where name = /* employee.name */'hoge'
Version number and optimistic concurrency control in SQL file¶
Optimistic concurrency control is executed if the following conditions are satisfied:
- An entity class is included in the parameters
- The leftmost entity class parameter has a property annotated with @Version
- The ignoreVersion property of the @Delete annotation is false
- The suppressOptimisticLockException property of the @Delete annotation is false
However, writing the SQL for optimistic concurrency control in the SQL file is the application developer’s responsibility. For example, as in the SQL below, you must specify the version number in the WHERE clause.
delete from EMPLOYEE where ID = /* employee.id */1 and VERSION = /* employee.version */1
OptimisticLockException, representing an optimistic concurrency control failure, is thrown if the deleted count of this SQL is 0.
OptimisticLockException is not thrown if the deleted count is not 0.
ignoreVersion¶
If the ignoreVersion property of the @Delete annotation is true,
OptimisticLockException is not thrown even if the deleted count is 0.
@Delete(sqlFile = true, ignoreVersion = true)
int delete(Employee employee);
suppressOptimisticLockException¶
If the suppressOptimisticLockException property of the @Delete annotation is true,
OptimisticLockException is not thrown even if the deleted count is 0.
@Delete(sqlFile = true, suppressOptimisticLockException = true)
int delete(Employee employee);
Query timeout¶
You can specify the query timeout in seconds with the queryTimeout property of the @Delete annotation.
@Delete(queryTimeout = 10)
int delete(Employee employee);
This setting is applied regardless of whether an SQL file is used.
The query timeout specified in the Configuration is used if the queryTimeout property is not set.
SQL log output format¶
You can specify the SQL log output format with the sqlLog property of the @Delete annotation.
@Delete(sqlLog = SqlLogType.RAW)
int delete(Employee employee);
SqlLogType.RAW means that the SQL with bind parameters (?) is output to the log.
Batch insert¶
Contents
Annotate a Dao method with @BatchInsert to execute batch inserts.
@Dao(config = AppConfig.class)
public interface EmployeeDao {
@BatchInsert
int[] insert(List<Employee> employees);
@BatchInsert
BatchResult<ImmutableEmployee> insert(List<ImmutableEmployee> employees);
}
By default the INSERT statement is auto generated.
You can map the method to an arbitrary SQL file by setting the sqlFile element of @BatchInsert to true.
If the parameter elements are Entity classes for which an entity listener is specified,
the preInsert method of the entity listener is called for each entity before the insert is executed.
Likewise, the postInsert method of the entity listener is called for each entity after the insert is executed.
Return value¶
If the element of the Iterable subtype parameter is an immutable entity class, the return value must be org.seasar.doma.jdbc.BatchResult whose element is that entity class.
If the above condition is not satisfied, the return value must be int[], where each element represents the updated count of the corresponding statement.
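For example, the inserted entities and the updated counts can be read from the BatchResult object (a minimal sketch; the employees variable and the getter usage are illustrative):
EmployeeDao dao = new EmployeeDaoImpl();
BatchResult<ImmutableEmployee> result = dao.insert(employees);
List<ImmutableEmployee> newEmployees = result.getEntities(); // entities reflecting generated values
int[] counts = result.getCounts();                           // updated count per statement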
Batch insert by auto generated SQL¶
The parameter type must be a subtype of java.lang.Iterable whose element is an Entity class.
Only one parameter can be specified.
The parameter must not be null.
The number of elements of the returned array is equal to the number of elements of the Iterable.
Each element of the array holds the updated count of the corresponding statement.
Identifier¶
If the identifier of the Entity class is annotated with @GeneratedValue, the identifier is generated and set automatically.
See Id generation for cautionary points.
Version numbers¶
If the Entity class has a property annotated with @Version and a value of 0 or greater is explicitly set to that property, that value is used.
If no value is set, or the value is less than 0, the value 1 is set automatically.
Insertion target property¶
insertable¶
If the Entity class has properties annotated with @Column, a property whose insertable element of @Column is false is excluded from the insertion.
exclude¶
Properties specified in the exclude element of @BatchInsert are excluded from the insertion.
Even if the insertable element of @Column is true, a property is excluded from the insertion if it is specified in this element.
@BatchInsert(exclude = {"name", "salary"})
int[] insert(List<Employee> employees);
include¶
Only properties specified in the include element of @BatchInsert are included in the insertion.
If the same property is specified in both the include element and the exclude element of @BatchInsert, the property is excluded from the insertion.
Even if a property is specified in this element, it is excluded from the insertion if the insertable element of @Column is false.
@BatchInsert(include = {"name", "salary"})
int[] insert(List<Employee> employees);
Batch insert by SQL file¶
To execute batch inserts with an SQL file, set the sqlFile element of @BatchInsert to true
and prepare an SQL file that corresponds to the method.
@BatchInsert(sqlFile = true)
int[] insert(List<Employee> employees);
@BatchInsert(sqlFile = true)
BatchResult<ImmutableEmployee> insert(List<ImmutableEmployee> employees);
The parameter must be a subtype of java.lang.Iterable whose element is an arbitrary type.
Only one parameter can be specified.
The parameter must not be null.
The number of elements of the returned array is equal to the number of elements of the Iterable.
Each element of the array holds the updated count of the corresponding statement.
Entity listener methods are not called even if an entity listener is specified for the Entity classes.
For example, the SQL corresponding to the above methods is written as follows.
insert into employee (id, name, salary, version)
values (/* employees.id */0, /* employees.name */'hoge', /* employees.salary */100, /* employees.version */0)
In the SQL file, the parameter name refers to an element of the java.lang.Iterable subtype.
Automatic identifier setting and automatic version number setting are not performed in batch inserts by SQL file.
Also, the exclude and include elements of @BatchInsert are not referenced.
Query timeout¶
You can specify the query timeout in seconds in the queryTimeout element of @BatchInsert.
@BatchInsert(queryTimeout = 10)
int[] insert(List<Employee> employees);
This setting is applied regardless of whether an SQL file is used.
If no value is specified in the queryTimeout element, the query timeout specified in the Configuration is used.
Batch size¶
You can specify the batch size in the batchSize element of @BatchInsert.
@BatchInsert(batchSize = 10)
int[] insert(List<Employee> employees);
This setting is applied regardless of whether an SQL file is used.
If no value is specified in the batchSize element, the batch size specified in the Configuration is used.
SQL log output format¶
You can specify the SQL log output format in the sqlLog element of @BatchInsert.
@BatchInsert(sqlLog = SqlLogType.RAW)
int[] insert(List<Employee> employees);
SqlLogType.RAW means that the SQL with bind parameters (?) is output to the log.
Batch update¶
Contents
Annotate a Dao method with @BatchUpdate to execute batch updates.
@Dao(config = AppConfig.class)
public interface EmployeeDao {
@BatchUpdate
int[] update(List<Employee> employees);
@BatchUpdate
BatchResult<ImmutableEmployee> update(List<ImmutableEmployee> employees);
}
By default the UPDATE statement is auto generated.
You can map the method to an arbitrary SQL file by setting the sqlFile property of the @BatchUpdate annotation to true.
If the parameter elements are Entity classes for which an entity listener is specified,
the preUpdate method of the entity listener is called for each entity before the update is executed.
Likewise, the postUpdate method of the entity listener is called for each entity after the update is executed.
Return value¶
If the element of the Iterable subtype parameter is an immutable entity class, the return value must be org.seasar.doma.jdbc.BatchResult whose element is that entity class.
If the above condition is not satisfied, the return value must be int[], where each element represents the updated count of the corresponding statement.
Batch update by auto generated SQL¶
The parameter type must be a subtype of java.lang.Iterable whose element is an Entity class.
Only one parameter can be specified.
The parameter must not be null.
The number of elements of the returned array is equal to the number of elements of the Iterable.
Each element of the array holds the updated count of the corresponding statement.
Version number and optimistic concurrency control in auto generated SQL¶
Optimistic concurrency control is executed if the following conditions are satisfied:
- The Entity class that is the element of the java.lang.Iterable subtype parameter has a property annotated with @Version
- The ignoreVersion element of the @BatchUpdate annotation is false
If optimistic concurrency control is enabled, the version number is included with the identifier in the update condition and is incremented by 1.
BatchOptimisticLockException, representing an optimistic concurrency control failure, is thrown if the updated count is 0.
If the updated count is 1, BatchOptimisticLockException is not thrown and the version property of the entity is incremented by 1.
ignoreVersion¶
If the ignoreVersion property of the @BatchUpdate annotation is true,
the version number is not included in the update condition but is included in the SET clause of the UPDATE statement.
The version number is updated with the value set by the application.
BatchOptimisticLockException is not thrown in this case, even if the updated count is 0.
@BatchUpdate(ignoreVersion = true)
int[] update(List<Employee> employees);
suppressOptimisticLockException¶
If the suppressOptimisticLockException property of @BatchUpdate is true
and a property annotated with @Version exists, the version number is included in the update condition and incremented by 1,
but BatchOptimisticLockException is not thrown even if the updated count is 0.
The version property value of the entity is still incremented by 1.
@BatchUpdate(suppressOptimisticLockException = true)
int[] update(List<Employee> employees);
Update target property¶
updatable¶
If the Entity class has properties annotated with @Column, a property whose updatable property of @Column is false is excluded from the updating target.
exclude¶
Properties specified in the exclude property of the @BatchUpdate annotation are excluded from the updating target.
Even if the updatable property of the @Column annotation is true, a property is excluded from the updating target if it is specified in this element.
@BatchUpdate(exclude = {"name", "salary"})
int[] update(List<Employee> employees);
include¶
Only properties specified in the include property of the @BatchUpdate annotation are included in the updating target.
If the same property is specified in both the include property and the exclude property of @BatchUpdate, the property is excluded from the updating target.
Even if a property is specified in this element, it is excluded from the updating target if the updatable property of the @Column annotation is false.
@BatchUpdate(include = {"name", "salary"})
int[] update(List<Employee> employees);
Batch update by SQL file¶
To execute batch updates with an SQL file,
set the sqlFile property of the @BatchUpdate annotation to true and prepare an SQL file that corresponds to the method.
Note
When batch updating with an SQL file, the rules differ depending on whether the Population directive is used.
Case of using the comment that generates the update column list¶
@BatchUpdate(sqlFile = true)
int[] update(List<Employee> employees);
@BatchUpdate(sqlFile = true)
BatchResult<ImmutableEmployee> update(List<ImmutableEmployee> employees);
The parameter type must be a subtype of java.lang.Iterable whose element is an Entity class.
Only one parameter can be specified.
The parameter must not be null.
The number of elements of the returned array is equal to the number of elements of the Iterable.
Each element of the array holds the updated count of the corresponding statement.
For example, the SQL corresponding to the above methods is written as follows.
update employee set /*%populate*/ id = id where name = /* employees.name */'hoge'
In the SQL file, the parameter name refers to an element of the Iterable subtype.
The rules for the updating target properties are the same as for Batch update by auto generated SQL.
Case of not using the comment that generates the update column list¶
@BatchUpdate(sqlFile = true)
int[] update(List<Employee> employees);
@BatchUpdate(sqlFile = true)
BatchResult<ImmutableEmployee> update(List<ImmutableEmployee> employees);
The parameter type must be a subtype of java.lang.Iterable whose element is an arbitrary type.
Only one parameter can be specified.
The parameter must not be null.
The number of elements of the returned array is equal to the number of elements of the Iterable.
Each element of the array holds the updated count of the corresponding statement.
For example, the SQL corresponding to the above methods is written as follows.
update employee set name = /* employees.name */'hoge', salary = /* employees.salary */100
where id = /* employees.id */0
In the SQL file, the parameter name refers to an element of the Iterable subtype.
Automatic version number updating is not performed in batch updates by SQL file.
Also, the exclude and include properties of the @BatchUpdate annotation are not referenced.
Version number and optimistic concurrency control in SQL file¶
Optimistic concurrency control is executed if the following conditions are satisfied:
- The element of the java.lang.Iterable subtype parameter is an Entity class that has a property annotated with @Version
- The ignoreVersion property of the @BatchUpdate annotation is false
However, writing the SQL for optimistic concurrency control in the SQL file is the application developer’s responsibility. For example, as in the SQL below, you must specify the version number in the WHERE clause and increment the version number by 1 in the SET clause.
update EMPLOYEE set DELETE_FLAG = 1, VERSION = /* employees.version */1 + 1
where ID = /* employees.id */1 and VERSION = /* employees.version */1
BatchOptimisticLockException, representing an optimistic concurrency control failure, is thrown if the updated count of this SQL is 0.
On the other hand, if the updated count is 1, BatchOptimisticLockException is not thrown and the version property of the entity is incremented by 1.
If the ignoreVersion property of the @BatchUpdate annotation is true,
BatchOptimisticLockException is not thrown, even if the updated count is 0 or more than 1.
The version property of the entity is not modified either.
@BatchUpdate(sqlFile = true, ignoreVersion = true)
int[] update(List<Employee> employees);
If the suppressOptimisticLockException property of @BatchUpdate is true,
BatchOptimisticLockException is not thrown even if the updated count is 0.
However, the version property value of the entity is incremented by 1.
@BatchUpdate(sqlFile = true, suppressOptimisticLockException = true)
int[] update(List<Employee> employees);
Unique constraint violation¶
UniqueConstraintException is thrown if a unique constraint violation occurs, regardless of whether an SQL file is used.
Query timeout¶
You can specify the query timeout in seconds with the queryTimeout property of the @BatchUpdate annotation.
@BatchUpdate(queryTimeout = 10)
int[] update(List<Employee> employees);
This setting is applied regardless of whether an SQL file is used.
The query timeout specified in the Configuration is used if the queryTimeout property is not set.
Batch size¶
You can specify the batch size with the batchSize property of the @BatchUpdate annotation.
@BatchUpdate(batchSize = 10)
int[] update(List<Employee> employees);
This setting is applied regardless of whether an SQL file is used.
The batch size specified in the Configuration is used if the batchSize property is not set.
SQL log output format¶
You can specify the SQL log output format with the sqlLog property of the @BatchUpdate annotation.
@BatchUpdate(sqlLog = SqlLogType.RAW)
int[] update(List<Employee> employees);
SqlLogType.RAW means that the SQL with bind parameters (?) is output to the log.
Batch delete¶
Contents
Annotate a Dao method with @BatchDelete to execute batch deletes.
@Dao(config = AppConfig.class)
public interface EmployeeDao {
@BatchDelete
int[] delete(List<Employee> employees);
...
}
By default the DELETE statement is auto generated.
You can map the method to an arbitrary SQL file by setting the sqlFile element of @BatchDelete to true.
If an entity listener is specified for the Entity classes that are the elements of the parameter,
the preDelete method of the entity listener is called for each entity before the delete is executed.
Likewise, the postDelete method of the entity listener is called for each entity after the delete is executed.
Batch delete by auto generated SQL¶
The parameter type must be a subtype of java.lang.Iterable whose element is an Entity class.
Only one parameter can be specified.
The parameter must not be null.
The number of elements of the returned array is equal to the number of elements of the Iterable.
Each element of the array holds the updated count of the corresponding statement.
Version number and optimistic concurrency control in auto generated SQL¶
Optimistic concurrency control is executed if the following conditions are satisfied:
- The Entity class that is the element of the java.lang.Iterable subtype parameter has a property annotated with @Version
- The ignoreVersion element of the @BatchDelete annotation is false
If optimistic concurrency control is enabled, the version number is included with the identifier in the delete condition.
In that case, BatchOptimisticLockException, representing an optimistic concurrency control failure, is thrown if the deleted count is 0.
ignoreVersion¶
If the ignoreVersion element of @BatchDelete is true,
the version number is not included in the delete condition.
BatchOptimisticLockException is not thrown even if the deleted count is 0.
@BatchDelete(ignoreVersion = true)
int[] delete(List<Employee> employees);
suppressOptimisticLockException¶
If the suppressOptimisticLockException element of @BatchDelete is true,
the version number is included in the delete condition,
but BatchOptimisticLockException is not thrown even if the deleted count is 0.
@BatchDelete(suppressOptimisticLockException = true)
int[] delete(List<Employee> employees);
Batch delete by SQL file¶
To execute batch deletes with an SQL file, set the sqlFile element of @BatchDelete to true
and prepare an SQL file that corresponds to the method.
@BatchDelete(sqlFile = true)
int[] delete(List<Employee> employees);
The parameter must be a subtype of java.lang.Iterable whose element is an arbitrary type.
Only one parameter can be specified.
The parameter must not be null.
The number of elements of the returned array is equal to the number of elements of the Iterable.
Each element of the array holds the updated count of the corresponding statement.
For example, the SQL corresponding to the above method is written as follows.
delete from employee where name = /* employees.name */'hoge'
In the SQL file, the parameter name refers to an element of the java.lang.Iterable subtype.
Version number and optimistic concurrency control in SQL file¶
Optimistic concurrency control is executed if the following conditions are satisfied:
- The element of the java.lang.Iterable subtype parameter is an Entity class that has a property annotated with @Version
- The ignoreVersion element of the @BatchDelete annotation is false
However, writing the SQL for optimistic concurrency control in the SQL file is the application developer’s responsibility. For example, as in the SQL below, you must specify the version number in the WHERE clause.
delete from EMPLOYEE where ID = /* employees.id */1 and VERSION = /* employees.version */1
BatchOptimisticLockException, representing an optimistic concurrency control failure, is thrown if the deleted count of this SQL is 0 or more than 1.
ignoreVersion¶
If the ignoreVersion element of @BatchDelete is true,
BatchOptimisticLockException is not thrown even if the deleted count is 0 or more than 1.
@BatchDelete(sqlFile = true, ignoreVersion = true)
int[] delete(List<Employee> employees);
suppressOptimisticLockException¶
If the suppressOptimisticLockException element of @BatchDelete is true,
BatchOptimisticLockException is not thrown even if the deleted count is 0 or more than 1.
@BatchDelete(sqlFile = true, suppressOptimisticLockException = true)
int[] delete(List<Employee> employees);
Query timeout¶
You can specify the query timeout in seconds in the queryTimeout element of @BatchDelete.
@BatchDelete(queryTimeout = 10)
int[] delete(List<Employee> employees);
This setting is applied regardless of whether an SQL file is used.
If no value is specified in the queryTimeout element, the query timeout specified in the Configuration is used.
Batch size¶
You can specify the batch size in the batchSize element of @BatchDelete.
@BatchDelete(batchSize = 10)
int[] delete(List<Employee> employees);
This setting is applied regardless of whether an SQL file is used.
If no value is specified in the batchSize element, the batch size specified in the Configuration is used.
SQL log output format¶
You can specify the SQL log output format in the sqlLog element of @BatchDelete.
@BatchDelete(sqlLog = SqlLogType.RAW)
int[] delete(List<Employee> employees);
SqlLogType.RAW means that the SQL with bind parameters (?) is output to the log.
Stored function¶
Contents
Annotate a Dao method with @Function to call a stored function.
@Dao(config = AppConfig.class)
public interface EmployeeDao {
@Function
Integer execute(@In Integer id, @InOut Reference<BigDecimal> salary);
...
}
Each parameter must be annotated with one of @In, @InOut, @Out, or @ResultSet to indicate its kind.
Multiple parameters can be specified.
Return value¶
The return value must be one of the following.
Returning a single result¶
- Basic classes
- Domain classes
- Entity classes
- java.util.Map<String, Object>
- java.util.Optional whose element is one of Basic classes, Domain classes, Entity classes, or java.util.Map<String, Object>
- java.util.OptionalInt
- java.util.OptionalLong
- java.util.OptionalDouble
Returning multiple results¶
A java.util.List whose element is one of the following:
- Basic classes
- Domain classes
- Entity classes
- java.util.Map<String, Object>
- java.util.Optional whose element is either Basic classes or Domain classes
- java.util.OptionalInt
- java.util.OptionalLong
- java.util.OptionalDouble
However, multiple results can be returned only if the function returns a cursor as its execution result.
Function name¶
By default the method name is used as the function name.
If a value is specified in the name element of @Function, that value is used as the function name.
@Function(name = "calculateSalary")
void execute(@In Integer id, @InOut Reference<BigDecimal> salary);
You can specify a catalog name or a schema name in the catalog element or the schema element of @Function.
In that case the function name is the catalog element, the schema element, and the name element
(or the method name if name is not specified) joined with periods.
@Function(catalog = "CATALOG", schema = "SCHEMA", name = "calculateSalary")
void execute(@In Integer id, @InOut Reference<BigDecimal> salary);
If the return type is an Entity class or a java.util.List whose element is an Entity class,
and you want to ensure that all entity properties are mapped to result set columns without omission,
specify true in the ensureResultMapping element of @Function.
@Function(ensureResultMapping = true)
List<Employee> execute();
ResultMappingException is thrown if there is a property that is not mapped to a result set column.
Parameters¶
The order of the Dao method parameters must match the order of the stored function parameters.
IN parameters¶
An IN parameter is indicated by annotating a method parameter with @In.
The following parameter types can be specified:
- Basic classes
- Domain classes
- java.util.Optional whose element is either Basic classes or Domain classes
- java.util.OptionalInt
- java.util.OptionalLong
- java.util.OptionalDouble
The argument may be null if the parameter type is a basic type or a domain class.
For any other type, the argument must not be null.
@Function
void execute(@In Integer id);
It is used as follows.
EmployeeDao dao = new EmployeeDaoImpl();
dao.execute(1);
INOUT parameters¶
An INOUT parameter is indicated by annotating a method parameter with @InOut.
The type of the annotated parameter must be org.seasar.doma.jdbc.Reference.
The following types can be specified as the type parameter of Reference:
- Basic classes
- Domain classes
- java.util.Optional whose element is either Basic classes or Domain classes
- java.util.OptionalInt
- java.util.OptionalLong
- java.util.OptionalDouble
The argument must not be null.
@Function
void execute(@InOut Reference<BigDecimal> salary);
It is used as follows.
EmployeeDao dao = new EmployeeDaoImpl();
BigDecimal in = new BigDecimal(100);
Reference<BigDecimal> ref = new Reference<BigDecimal>(in);
dao.execute(ref);
BigDecimal out = ref.get();
OUT parameters¶
An OUT parameter is indicated by annotating a method parameter with @Out.
The type of the annotated parameter must be org.seasar.doma.jdbc.Reference.
The following types can be specified as the type parameter of Reference:
- Basic classes
- Domain classes
- java.util.Optional whose element is either Basic classes or Domain classes
- java.util.OptionalInt
- java.util.OptionalLong
- java.util.OptionalDouble
The argument must not be null.
If the return type of the method is not void, the return value becomes an OUT parameter.
@Function
Integer execute(@Out Reference<BigDecimal> salary);
It is used as follows.
EmployeeDao dao = new EmployeeDaoImpl();
Reference<BigDecimal> ref = new Reference<BigDecimal>();
Integer result = dao.execute(ref);
BigDecimal out = ref.get();
Cursor OUT parameters or result sets¶
A cursor OUT parameter, or a result set returned by the stored function,
is indicated by annotating a method parameter with @ResultSet.
The type of the annotated parameter must be a java.util.List whose element is one of the following:
- Basic classes
- Domain classes
- Entity classes
- java.util.Map<String, Object>
- java.util.Optional whose element is either Basic classes or Domain classes
- java.util.OptionalInt
- java.util.OptionalLong
- java.util.OptionalDouble
The argument must not be null.
@Function
void execute(@ResultSet List<Employee> employee);
It is used as follows.
EmployeeDao dao = new EmployeeDaoImpl();
List<Employee> employees = new ArrayList<Employee>();
dao.execute(employees);
for (Employee e : employees) {
...
}
If the type parameter of the java.util.List annotated with @ResultSet is an Entity class
and you want to ensure that all entity properties are mapped to result set columns without omission,
specify true in the ensureResultMapping element of @ResultSet.
@Function
void execute(@ResultSet(ensureResultMapping = true) List<Employee> employee);
ResultMappingException is thrown if there is a property that is not mapped to a result set column.
Naming rule of map’s key¶
When mapping result sets to java.util.Map<String, Object>,
you can specify the naming rule for the map keys in the mapKeyNaming element of @Function.
@Function(mapKeyNaming = MapKeyNamingType.CAMEL_CASE)
List<Map<String, Object>> execute(@In Integer id);
MapKeyNamingType.CAMEL_CASE means that column names are converted to camel case.
There are also rules that convert column names to upper case or lower case.
The final conversion result is decided by the value specified here and the implementation of MapKeyNaming specified in the Configuration.
SQL log output format¶
You can specify the SQL log output format in the sqlLog element of @Function.
@Function(sqlLog = SqlLogType.RAW)
void execute(@In Integer id);
SqlLogType.RAW means that the SQL with bind parameters (?) is output to the log.
Stored procedure¶
Contents
Annotate a Dao method with @Procedure to call a stored procedure.
@Dao(config = AppConfig.class)
public interface EmployeeDao {
@Procedure
void execute(@In Integer id, @InOut Reference<BigDecimal> salary);
...
}
Each parameter must be annotated with one of @In, @InOut, @Out, or @ResultSet to indicate its kind.
Multiple parameters can be specified.
Procedure name¶
By default the method name is used as the procedure name.
If a value is specified in the name element of @Procedure, that value is used as the procedure name.
@Procedure(name = "calculateSalary")
void execute(@In Integer id, @InOut Reference<BigDecimal> salary);
You can specify a catalog name or a schema name in the catalog element or the schema element of @Procedure.
In that case the procedure name is the catalog element, the schema element, and the name element
(or the method name if name is not specified) joined with periods.
@Procedure(catalog = "CATALOG", schema = "SCHEMA", name = "calculateSalary")
void execute(@In Integer id, @InOut Reference<BigDecimal> salary);
Parameters¶
The order of the Dao method parameters must match the order of the stored procedure parameters.
IN parameters¶
An IN parameter is indicated by annotating a method parameter with @In.
The following parameter types can be specified:
- Basic classes
- Domain classes
- java.util.Optional whose element is either Basic classes or Domain classes
- java.util.OptionalInt
- java.util.OptionalLong
- java.util.OptionalDouble
The argument may be null if the parameter type is a basic type or a domain class.
For any other type, the argument must not be null.
@Procedure
void execute(@In Integer id);
It is used as follows.
EmployeeDao dao = new EmployeeDaoImpl();
dao.execute(1);
INOUT parameters¶
An INOUT parameter is indicated by annotating a method parameter with @InOut.
The type of the annotated parameter must be org.seasar.doma.jdbc.Reference.
The following types can be specified as the type parameter of Reference:
- Basic classes
- Domain classes
- java.util.Optional whose element is either Basic classes or Domain classes
- java.util.OptionalInt
- java.util.OptionalLong
- java.util.OptionalDouble
The argument must not be null.
@Procedure
void execute(@InOut Reference<BigDecimal> salary);
It is used as follows.
EmployeeDao dao = new EmployeeDaoImpl();
BigDecimal in = new BigDecimal(100);
Reference<BigDecimal> ref = new Reference<BigDecimal>(in);
dao.execute(ref);
BigDecimal out = ref.get();
OUT parameters¶
An OUT parameter is indicated by annotating a method parameter with @Out.
The type of the annotated parameter must be org.seasar.doma.jdbc.Reference.
The following types can be specified as the type parameter of Reference:
- Basic classes
- Domain classes
- java.util.Optional whose element is either Basic classes or Domain classes
- java.util.OptionalInt
- java.util.OptionalLong
- java.util.OptionalDouble
The argument must not be null.
@Procedure
void execute(@Out Reference<BigDecimal> salary);
It is used as follows.
EmployeeDao dao = new EmployeeDaoImpl();
Reference<BigDecimal> ref = new Reference<BigDecimal>();
dao.execute(ref);
BigDecimal out = ref.get();
Cursor OUT parameters or result sets¶
A cursor OUT parameter, or a result set returned by the stored procedure,
is indicated by annotating a method parameter with @ResultSet.
The type of the annotated parameter must be a java.util.List whose element is one of the following:
- Basic classes
- Domain classes
- Entity classes
- java.util.Map<String, Object>
- java.util.Optional whose element is either Basic classes or Domain classes
- java.util.OptionalInt
- java.util.OptionalLong
- java.util.OptionalDouble
The argument must not be null.
@Procedure
void execute(@ResultSet List<Employee> employees);
It is used as follows.
EmployeeDao dao = new EmployeeDaoImpl();
List<Employee> employees = new ArrayList<Employee>();
dao.execute(employees);
for (Employee e : employees) {
...
}
If the type parameter of the java.util.List annotated with @ResultSet is an Entity class
and you want to ensure that all entity properties are mapped to result set columns without omission,
specify true in the ensureResultMapping element of @ResultSet.
@Procedure
void execute(@ResultSet(ensureResultMapping = true) List<Employee> employee);
ResultMappingException is thrown if there is a property that is not mapped to a result set column.
Naming rule of map’s key¶
When mapping result sets to java.util.Map<String, Object>,
you can specify the naming rule for the map keys in the mapKeyNaming element of @Procedure.
@Procedure(mapKeyNaming = MapKeyNamingType.CAMEL_CASE)
void execute(@ResultSet List<Map<String, Object>> employees);
MapKeyNamingType.CAMEL_CASE means that column names are converted to camel case.
There are also rules that convert column names to upper case or lower case.
The final conversion result is decided by the value specified here and the implementation of MapKeyNaming specified in the Configuration.
SQL log output format¶
You can specify the SQL log output format in the sqlLog element of @Procedure.
@Procedure(sqlLog = SqlLogType.RAW)
void execute(@In Integer id);
SqlLogType.RAW means that the SQL with bind parameters (?) is output to the log.
Factory¶
To get instances from the factory methods provided by java.sql.Connection,
annotate the Dao method with one of the following annotations:
- @ArrayFactory to create a java.sql.Array
- @BlobFactory to create a java.sql.Blob
- @ClobFactory to create a java.sql.Clob
- @NClobFactory to create a java.sql.NClob
- @SQLXMLFactory to create a java.sql.SQLXML
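For example (a minimal sketch; the Dao name FactoryDao and the typeName value "integer" are illustrative and must match a type defined in your database):
@Dao(config = AppConfig.class)
public interface FactoryDao {
    // creates a java.sql.Blob via the underlying java.sql.Connection
    @BlobFactory
    Blob createBlob();
    // creates a java.sql.Array whose database type is "integer"
    @ArrayFactory(typeName = "integer")
    Array createIntegerArray(Integer[] elements);
}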
Script¶
Annotate a Dao method with @Script to execute SQL scripts.
@Dao(config = AppConfig.class)
public interface EmployeeDao {
@Script
void createTable();
...
}
The return type of the method must be void, and the method must have no parameters.
A script file corresponding to the method is required.
Script file¶
In script files, the RDBMS name and the block delimiter provided by the Dialect are used.
Database | Dialect name | RDBMS name | Block delimiter |
---|---|---|---|
DB2 | Db2Dialect | db2 | @ |
H2 Database Engine 1.2.126 | H212126Dialect | h2 | |
H2 Database | H2Dialect | h2 | |
HSQLDB | HsqldbDialect | hsqldb | |
Microsoft SQL Server 2008 | Mssql2008Dialect | mssql | GO |
Microsoft SQL Server | MssqlDialect | mssql | GO |
MySQL | MySqlDialect | mysql | / |
Oracle Database | OracleDialect | oracle | / |
PostgreSQL | PostgresDialect | postgres | $$ |
SQLite | SqliteDialect | sqlite | |
Format of file path¶
The file name must follow this format:
META-INF/<fully qualified Dao interface name converted to a directory path>/<Dao method name>.script
For example, when the Dao class is aaa.bbb.EmployeeDao and the method to be mapped is createTable, the path is as follows:
META-INF/aaa/bbb/EmployeeDao/createTable.script
If you need to support multiple RDBMSs and want to use a separate script file for a specific RDBMS, you can indicate the file to be used preferentially by putting a hyphen and the RDBMS name before the .script extension. For example, a script file dedicated to PostgreSQL is named as follows:
META-INF/aaa/bbb/EmployeeDao/createTable-postgres.script
In this case, only when PostgreSQL is used,
META-INF/aaa/bbb/EmployeeDao/createTable-postgres.script
is used in preference to
META-INF/aaa/bbb/EmployeeDao/createTable.script.
Delimiters¶
There are two kinds of delimiters in script files: the statement delimiter and the block delimiter.
The statement delimiter is a semicolon ;.
For the block delimiter, the value provided by the Dialect is used.
The block delimiter can also be specified explicitly in the blockDelimiter element of the annotation.
If specified in the annotation, it takes precedence over the value of the Dialect.
@Script(blockDelimiter = "GO")
void createTable();
Continuing on error¶
By default, if the execution of any SQL in the script fails, processing stops at that point.
However, by specifying false in the haltOnError element of the annotation, you can make processing continue when an error occurs.
@Script(haltOnError = false)
void createTable();
Example¶
A script file can be written as follows. This example is a script that works on Oracle Database.
/*
 * Table definition (SQL statement)
 */
create table EMPLOYEE (
ID numeric(5) primary key, -- identifier
NAME varchar2(20) -- name
);
/*
 * Insert data (SQL statement)
 */
insert into EMPLOYEE (ID, NAME) values (1, 'SMITH');
/*
 * Procedure definition (SQL block)
 */
create or replace procedure proc
( cur out sys_refcursor,
employeeid in numeric
) as
begin
open cur for select * from employee where id > employeeid order by id;
end proc_resultset;
/
/*
 * Procedure definition 2 (SQL block)
 */
create or replace procedure proc2
( cur out sys_refcursor,
employeeid in numeric
) as
begin
open cur for select * from employee where id > employeeid order by id;
end proc_resultset;
/
Two kinds of comments can be used: single-line comments (--) and block comments (/* */).
Comments are removed before the statements are issued to the database.
A single SQL statement can be written across multiple lines.
Statements must be separated by a semicolon ;.
A line break is not regarded as a statement separator.
The end of a block, such as a stored procedure definition, is indicated by the default value provided by the Dialect or by the value specified in the blockDelimiter element of @Script.
In this example, the slash /, which is the default delimiter of OracleDialect, is used.
Write the block delimiter at the beginning of a line and do not write anything after it.
In other words, the line must consist of the delimiter only.
SQL log output format¶
You can specify the SQL log output format in the sqlLog element of @Script.
@Script(sqlLog = SqlLogType.RAW)
void createTable();
SqlLogType.RAW means that the SQL with bind parameters (?) is output to the log.
SQL processor¶
To handle the SQL built from an SQL template in your application, annotate a Dao method with @SqlProcessor.
@Dao(config = AppConfig.class)
public interface EmployeeDao {
@SqlProcessor
<R> R process(Integer id, BiFunction<Config, PreparedSql, R> handler);
...
}
SQL templates corresponding to the method are required.
Warning
When you build and execute SQL with the SQL processor, there is always a potential risk of SQL injection. First consider using other queries or the query builders. Also, be careful not to use untrusted values when building SQL with the SQL processor.
Return value¶
The return type of the method can be any type.
However, it must match the third type parameter of the BiFunction parameter.
If the return type is void, specify Void as the third type parameter of the BiFunction parameter.
@SqlProcessor
void process(Integer id, BiFunction<Config, PreparedSql, Void> handler);
Parameters¶
Exactly one parameter of type BiFunction must be included. The BiFunction parameter is used to process the SQL produced from the SQL template.
The other parameters can be referenced from the SQL template. Basically, the same types that can be specified as search conditions in Search can be used.
Usage example¶
For example, you can further transform the SQL processed from the SQL template and execute it directly. (Exception handling is omitted in this example.)
EmployeeDao dao = ...
dao.process(1, (config, preparedSql) -> {
String sql = preparedSql.getRawSql();
String anotherSql = createAnotherSql(sql);
DataSource dataSource = config.getDataSource();
Connection connection = dataSource.getConnection();
PreparedStatement statement = connection.prepareStatement(anotherSql);
return statement.execute();
});
The corresponding SQL template is, for example, as follows:
select * from employee where id = /*^ id */0
Query builders¶
Contents
The package org.seasar.doma.jdbc.builder
provides SQL builders.
When it is difficult to build an SQL statement with Queries, consider using the SQL builders in a default method.
Search¶
SelectBuilder builder = SelectBuilder.newInstance(config);
builder.sql("select");
builder.sql("id").sql(",");
builder.sql("name").sql(",");
builder.sql("salary");
builder.sql("from Emp");
builder.sql("where");
builder.sql("job_type = ").literal(String.class, "fulltime");
builder.sql("and");
builder.sql("name like ").param(String.class, "S%");
builder.sql("and");
builder.sql("age in (").params(Integer.class, Arrays.asList(20, 30, 40)).sql(")");
List<Emp> employees = builder.getEntityResultList(Emp.class);
You can get result of the SQL execution in various ways.
Single record search¶
- getScalarSingleResult
- getOptionalScalarSingleResult
- getEntitySingleResult
- getOptionalEntitySingleResult
- getMapSingleResult
- getOptionalMapSingleResult
Multiple records search¶
- getScalarResultList
- getOptionalScalarResultList
- getEntityResultList
- getMapResultList
Stream search¶
- streamAsScalar
- streamAsOptionalScalar
- streamAsEntity
- streamAsMap
Insert¶
InsertBuilder builder = InsertBuilder.newInstance(config);
builder.sql("insert into Emp");
builder.sql("(name, salary)");
builder.sql("values (");
builder.param(String.class, "SMITH").sql(", ");
builder.param(BigDecimal.class, new BigDecimal(1000)).sql(")");
builder.execute();
Update¶
UpdateBuilder builder = UpdateBuilder.newInstance(config);
builder.sql("update Emp");
builder.sql("set");
builder.sql("name = ").param(String.class, "SMIHT").sql(",");
builder.sql("salary = ").param(BigDecimal.class, new BigDecimal("1000"));
builder.sql("where");
builder.sql("id = ").param(int.class, 10);
builder.execute();
SQL templates¶
Contents
Overview¶
Doma supports SQL templates, called “two-way SQL”. “Two-way SQL” means that the SQL templates can be used in two ways:
- to build dynamic SQL statements from the templates; and
- to execute the templates in SQL tools as they are.
Every SQL template must correspond to a DAO method. For example, suppose you have the pair of an SQL template and a DAO method as follows:
select * from employee where employee_id = /* employeeId */99
Employee selectById(Integer employeeId);
The employeeId
expression enclosed between /*
and */
corresponds to
the method parameter “employeeId” of the DAO.
In runtime, the SQL comment and following number /* employeeId */99
is replaced with a bind variable ?
and the method parameter “employeeId” is passed to the variable.
The SQL statement generated from the SQL template is as follows:
select * from employee where employee_id = ?
The number 99
in the SQL template is test data and never used in runtime.
The test data is only useful when you execute the SQL template as is.
In other words, you can check whether the SQL template is grammatically correct with your favorite SQL tools.
Each SQL template is represented by either a text file or an annotation.
SQL templates in files¶
You can specify SQL templates in text files:
@Dao
public interface EmployeeDao {
@Select
Employee selectById(Integer employeeId);
@Delete(sqlFile = true)
int deleteByName(Employee employee);
}
Above selectById
and deleteByName
methods are mapped onto their own SQL files.
DAO methods must be annotated with one of following annotations:
- @Select
- @Insert(sqlFile = true)
- @Update(sqlFile = true)
- @Delete(sqlFile = true)
- @BatchInsert(sqlFile = true)
- @BatchUpdate(sqlFile = true)
- @BatchDelete(sqlFile = true)
Location¶
The SQL files must be located in directories below a “META-INF” directory which is included in CLASSPATH.
Format of file path¶
The SQL file path must follow the following format:
META-INF/path-format-of-dao-interface/dao-method.sql
For example, when the DAO interface name is aaa.bbb.EmployeeDao
and the DAO method name is selectById
,
the SQL file path is as follows:
META-INF/aaa/bbb/EmployeeDao/selectById.sql
Dependency on a specific RDBMS¶
You can specify dependency on a specific RDBMS by file name. To do this, put the hyphen “-” and RDBMS name before the extension “.sql”. For example, the file path specific to PostgreSQL is as follows:
META-INF/aaa/bbb/EmployeeDao/selectById-postgres.sql
The SQL files specific to RDBMSs are given priority. For example, in the environment where PostgreSQL is used, “META-INF/aaa/bbb/EmployeeDao/selectById-postgres.sql” is chosen instead of “META-INF/aaa/bbb/EmployeeDao/selectById.sql”. But in other environments, “META-INF/aaa/bbb/EmployeeDao/selectById.sql” is used.
The RDBMS names are stem from dialects:
RDBMS | Dialect | RDBMS Name |
---|---|---|
DB2 | Db2Dialect | db2 |
H2 Database | H2Dialect | h2 |
HSQLDB | HsqldbDialect | hsqldb |
Microsoft SQL Server | MssqlDialect | mssql |
MySQL | MySqlDialect | mysql |
Oracle Database | OracleDialect | oracle |
PostgreSQL | PostgresDialect | postgres |
SQLite | SqliteDialect | sqlite |
SQL templates in annotations¶
You can specify SQL templates to DAO methods with the @Sql
annotation:
@Dao
public interface EmployeeDao {
@Sql("select * from employee where employee_id = /* employeeId */99")
@Select
Employee selectById(Integer employeeId);
@Sql("delete from employee where employee_name = /* employee.employeeName */'aaa'")
@Delete
int deleteByName(Employee employee);
}
The @Sql
annotation must be combined with following annotations:
- @Select
- @Script
- @Insert
- @Update
- @Delete
- @BatchInsert
- @BatchUpdate
- @BatchDelete
Warning
The @Sql
annotation is an experimental feature.
The full qualified name of @Sql
is @org.seasar.doma.experimental.Sql
.
Directives¶
In SQL templates, SQL comments that follow specific rules are recognised as directives. The supported directives are as follows:
- Bind variable directive
- Literal variable directive
- Embedded variable directive
- Condition directive
- Loop directive
- Expansion directive
- Population directive
Note
See also Expression language for information of the expression language available in directives.
Bind variable directive¶
Bind variable directive is represented with the format /*...*/
.
The expression enclosed between /*
and */
is evaluated and
its evaluation result is passed to bind variable in SQL statement.
The directive must be followed by test data, which is never used in runtime.
Basic and domain parameters¶
The parameter whose type is one of Basic classes and Domain classes is recognised as a bind variable.
The following example is the pair of a DAO method and an SQL template:
Employee selectById(Integer employeeId);
select * from employee where employee_id = /* employeeId */99
The following SQL statement is generated from the SQL template:
select * from employee where employee_id = ?
Iterable parameters in IN clause¶
The parameter whose type is subtype of java.lang.Iterable
is
recognised as bind variables in IN clause.
But the type argument of java.lang.Iterable
must be one of Basic classes or Domain classes.
The directives must be followed by test data enclosed between (
and )
.
The following example is the pair of a DAO method and an SQL template:
List<Employee> selectByIdList(List<Integer> employeeIdList);
select * from employee where employee_id in /* employeeIdList */(1,2,3)
If the employeeIdList
contains five elements,
the following SQL statement is generated from the SQL template:
select * from employee where employee_id in (?, ?, ?, ?, ?)
If the employeeIdList
is empty,
the IN clause is replaced with in (null)
at runtime:
select * from employee where employee_id in (null)
Literal variable directive¶
Literal variable directive is represented with the format /*^...*/
.
The expression enclosed between /*^
and */
is evaluated and
its evaluation result is converted to a literal and embedded in the SQL statement.
The directive must be followed by test data, which is never used at runtime.
The following example is the pair of a DAO method and an SQL template:
Employee selectByCode(String code);
select * from employee where code = /*^ code */'test'
The DAO method is invoked as follows:
EmployeeDao dao = new EmployeeDaoImpl();
List<Employee> list = dao.selectByCode("abc");
The generated SQL statement is as follows:
select * from employee where code = 'abc'
Note
Literal variable directives are helpful to avoid bind variables and fix SQL plans.
Warning
Literal variable directives do not escape parameters against SQL injection.
However, the directives reject parameters containing the single quotation mark '.
Embedded variable directive¶
Embedded variable directive is represented with the format /*#...*/
.
The expression enclosed between /*#
and */
is evaluated and
its evaluation result is embedded in the SQL statement.
The following example is the pair of a DAO method and an SQL template:
List<Employee> selectAll(BigDecimal salary, String orderBy);
select * from employee where salary > /* salary */100 /*# orderBy */
The DAO method is invoked as follows:
EmployeeDao dao = new EmployeeDaoImpl();
BigDecimal salary = new BigDecimal(1000);
String orderBy = "order by salary asc, employee_name";
List<Employee> list = dao.selectAll(salary, orderBy);
The generated SQL statement is as follows:
select * from employee where salary > ? order by salary asc, employee_name
Note
Embedded variable directives are helpful to build SQL fragments such as ORDER BY clause.
Warning
To prevent SQL injection vulnerabilities, embedded variable directives reject parameters containing the following values:
- a single quotation
'
- a semicolon
;
- two hyphens
--
- a slash and an asterisk
/*
Condition directive¶
Condition directive allows you to build SQL statements conditionally.
Synopsis¶
/*%if condition*/
...
/*%elseif condition2*/
...
/*%elseif condition3*/
...
/*%else*/
...
/*%end*/
The expressions condition
, condition2
, and condition3
must be evaluated
to either boolean
or java.lang.Boolean
.
The elseif
directives and the else
directive are optional.
if¶
Suppose you have the following SQL template:
select * from employee where
/*%if employeeId != null */
employee_id = /* employeeId */99
/*%end*/
If the employeeId
is not null
, the generated SQL statement is as follows:
select * from employee where employee_id = ?
If the employeeId
is null
, the generated SQL statement is as follows:
select * from employee
The SQL keyword where
is removed automatically.
elseif and else¶
Suppose you have the following SQL template:
select
*
from
employee
where
/*%if employeeId != null */
employee_id = /* employeeId */9999
/*%elseif departmentId != null */
and
department_id = /* departmentId */99
/*%else*/
and
department_id is null
/*%end*/
If the employeeId != null
is evaluated true
, the generated SQL statement is as follows:
select
*
from
employee
where
employee_id = ?
If the employeeId == null && departmentId != null
is evaluated true
,
the generated SQL statement is as follows:
select
*
from
employee
where
department_id = ?
The SQL keyword and preceding department_id is removed automatically.
If the employeeId == null && departmentId == null
is evaluated true
,
the generated SQL statement is as follows:
select
*
from
employee
where
department_id is null
The SQL keyword and preceding department_id is removed automatically.
Nested condition directive¶
You can nest condition directives as follows:
select * from employee where
/*%if employeeId != null */
employee_id = /* employeeId */99
/*%if employeeName != null */
and
employee_name = /* employeeName */'hoge'
/*%else*/
and
employee_name is null
/*%end*/
/*%end*/
Removal of WHERE and HAVING clauses on condition directive¶
WHERE and HAVING clauses can become unnecessary as a result of condition directives. Such clauses are removed automatically.
Suppose you have the following SQL template:
select * from employee where
/*%if employeeId != null */
employee_id = /* employeeId */99
/*%end*/
If the employeeId != null
is evaluated false
,
the generated SQL statement is as follows:
select * from employee
Because the SQL clause where
followed by /*%if ...*/
is unnecessary,
it is removed automatically.
Removal of AND and OR keywords on condition directives¶
AND and OR keywords can become unnecessary as a result of condition directives. Such keywords are removed automatically.
Suppose you have the following SQL template:
select * from employee where
/*%if employeeId != null */
employee_id = /* employeeId */99
/*%end*/
and employee_name like 's%'
If the employeeId != null
is evaluated false
,
the generated SQL statement is as follows:
select * from employee where employee_name like 's%'
Because the SQL keyword and
following /*%end*/
is unnecessary,
it is removed automatically.
Restriction on condition directive¶
/*%if condition*/
and /*%end*/
must be included in the same SQL clause and at the same statement level.
The following template is invalid, because /*%if condition*/
is
in the FROM clause and /*%end*/
is in the WHERE clause:
select * from employee /*%if employeeId != null */
where employee_id = /* employeeId */99 /*%end*/
The following template is invalid, because /*%if condition*/
is
in the outer statement and /*%end*/
is in the inner statement:
select * from employee
where employee_id in /*%if departmentId != null */(select ... /*%end*/ ...)
Loop directive¶
Loop directive allows you to build SQL statements using loops.
Synopsis¶
/*%for item : sequence*/
...
/*%end*/
The item
is the loop variable.
The expression sequence
must be evaluated to a subtype of java.lang.Iterable.
Inside the block between /*%for item : sequence*/
and /*%end*/
,
two extra loop variables are available:
item_index: | The index (0-based number) of the current item in the loop |
---|---|
item_has_next: | Boolean value that tells whether a next item exists (that is, whether the current item is not the last one) |
The prefix item
indicates the name of the loop variable.
for and item_has_next¶
Suppose you have the following SQL template:
select * from employee where
/*%for name : names */
employee_name like /* name */'hoge'
/*%if name_has_next */
/*# "or" */
/*%end */
/*%end*/
If the sequence names
contains three items,
the generated SQL statement is as follows:
select * from employee where
employee_name like ?
or
employee_name like ?
or
employee_name like ?
Removal of WHERE and HAVING clauses on loop directive¶
WHERE and HAVING clauses can become unnecessary as a result of loop directives. Such clauses are removed automatically.
Suppose you have the following SQL template:
select * from employee where
/*%for name : names */
employee_name like /* name */'hoge'
/*%if name_has_next */
/*# "or" */
/*%end */
/*%end*/
If the sequence names
is empty,
the generated SQL statement is as follows:
select * from employee
Because the SQL clause where
followed by /*%for ...*/
is unnecessary,
it is removed automatically.
Removal of AND and OR keywords on loop directive¶
AND and OR keywords can become unnecessary as a result of loop directives. Such keywords are removed automatically.
Suppose you have the following SQL template:
select * from employee where
/*%for name : names */
employee_name like /* name */'hoge'
/*%if name_has_next */
/*# "or" */
/*%end */
/*%end*/
or
salary > 1000
If the sequence names
is empty,
the generated SQL statement is as follows:
select * from employee where salary > 1000
Because the SQL keyword or
following /*%end*/
is unnecessary,
it is removed automatically.
Restriction on loop directive¶
/*%for ...*/
and /*%end*/
must be included in the same SQL clause and at the same statement level.
See also Restriction on condition directive.
Expansion directive¶
Expansion directive allows you to build the column list of a SELECT clause from the definition of Entity classes.
Synopsis¶
/*%expand alias*/
The expression alias
is optional.
If it is specified, it must be evaluated to java.lang.String
.
The directive must be followed by the asterisk *
.
expand¶
Suppose you have the following SQL template and the entity class mapped to the template:
select /*%expand*/* from employee
@Entity
public class Employee {
Integer id;
String name;
Integer age;
}
The generated SQL statement is as follows:
select id, name, age from employee
If you specify an alias for the table, specify the same alias in the expansion directive:
select /*%expand "e" */* from employee e
The generated SQL statement is as follows:
select e.id, e.name, e.age from employee e
Population directive¶
Population directive allows you to build the column list of an UPDATE SET clause from the definition of Entity classes.
Synopsis¶
/*%populate*/
populate¶
Suppose you have the following SQL template and the entity class mapped to the template:
update employee set /*%populate*/ id = id where age < 30
@Entity
public class Employee {
Integer id;
String name;
Integer age;
}
The generated SQL statement is as follows:
update employee set id = ?, name = ?, age = ? where age < 30
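A template like the one above is typically paired with a DAO method that passes the entity supplying the values. The following is a sketch; the method name and SQL file name are illustrative and follow the file location rules described earlier:
@Dao
public interface EmployeeDao {
    // SQL file: META-INF/.../EmployeeDao/update.sql containing the template above
    @Update(sqlFile = true)
    int update(Employee employee);
}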
Comments¶
This section shows you how to distinguish directives from normal SQL comments.
Single line comment¶
A string starting with two hyphens -- is always a single line comment, never a directive.
Multi line comment¶
If the character following /*
is not permissible as the first character in a Java identifier
and it is neither %
, #
, @
, "
nor '
,
the /*
is the beginning of a multi line comment.
The following are the beginnings of multi line comments:
- /**…*/
- /*+…*/
- /*=…*/
- /*:…*/
- /*;…*/
- /*(…*/
- /*)…*/
- /*&…*/
On the other hand, the following are the beginnings of directives:
- /* …*/
- /*a…*/
- /*$…*/
- /*@…*/
- /*"…*/
- /*'…*/
- /*#…*/
- /*%…*/
Note
We recommend you always use /**...*/
to begin multi line comments because it is simple.
Expression language¶
Contents
You can write simple expressions in directives of SQL templates. The grammar is almost the same as Java. However, not everything that Java can do is possible.
Note
In particular, the big difference is how optional types such as java.util.Optional are handled.
In the expression, a value of Optional
type is always converted
to a value of the element type automatically.
For example a value of the Optional<String>
type is treated as a value of String
type.
Therefore, we can’t call methods of Optional
type,
nor can we call methods which have an Optional type in their parameters.
When you want to check existence of a value, use /*%if optional != null */
instead of /*%if optional.isPresent() */
.
The same is true for java.util.OptionalInt
, java.util.OptionalDouble
,
and java.util.OptionalLong
.
Literals¶
You can use the following literals:
Literal | Type |
---|---|
null | void |
true | boolean |
false | boolean |
10 | int |
10L | long |
0.123F | float |
0.123D | double |
0.123B | java.math.BigDecimal |
'a' | char |
"a" | java.lang.String |
The numeral types are distinguished by suffix letters such as L
or F
at the end of the literals. The suffixes must be capital letters.
select * from employee where
/*%if employeeName != null && employeeName.length() > 10 */
employee_name = /* employeeName */'smith'
/*%end*/
Comparison operators¶
You can use the following comparison operators:
Operator | Description |
---|---|
== | Equal to operator |
!= | Not equal to operator |
< | Less than operator |
<= | Less than or equal to operator |
> | Greater than operator |
>= | Greater than or equal to operator |
To use comparison operators, operands must implement java.lang.Comparable
.
The operands for <
, <=
, >
and >=
must not be null
.
select * from employee where
/*%if employeeName.indexOf("s") > -1 */
employee_name = /* employeeName */'smith'
/*%end*/
Logical operators¶
You can use the following logical operators:
Operator | Description |
---|---|
! | Logical complement operator |
&& | Conditional-AND operator |
|| | Conditional-OR operator |
With parentheses, you can override the precedence of operators.
select * from employee where
/*%if (departmentId == null || managerId == null) && employeeName != null */
employee_name = /* employeeName */'smith'
/*%end*/
Arithmetic operators¶
You can use the following arithmetic operators:
Operator | Description |
---|---|
+ | Additive operator |
- | Subtraction operator |
* | Multiplication operator |
/ | Division operator |
% | Remainder operator |
Operands must be of a numeric type.
select * from employee where
salary = /* salary + 1000 */0
String concatenation operator¶
You can concatenate characters using a concatenation operator +
.
The operands must be one of the following types:
- java.lang.String
- java.lang.Character
- char
select * from employee where
employee_name like /* employeeName + "_" */'smith'
Calling instance methods¶
You can call instance methods with the method names separated by dots .
.
The method visibility must be public.
select * from employee where
/*%if employeeName.startsWith("s") */
employee_name = /* employeeName */'smith'
/*%end*/
If the method has no argument, specify ()
after the method name.
select * from employee where
/*%if employeeName.length() > 10 */
employee_name = /* employeeName */'smith'
/*%end*/
Accessing instance fields¶
You can access instance fields with the field names separated by dots .
.
Even if the visibility is private, you can access it.
select * from employee where
employee_name = /* employee.employeeName */'smith'
Calling static methods¶
You can call static methods by prefixing the method name with the fully qualified class name enclosed in @ characters.
The method visibility must be public.
select * from employee where
/*%if @java.util.regex.Pattern@matches("^[a-z]*$", employeeName) */
employee_name = /* employeeName */'smith'
/*%end*/
Accessing static fields¶
You can access static fields by prefixing the field name with the fully qualified class name enclosed in @ characters.
Even if the visibility is private, you can access it.
select * from employee where
/*%if employeeName.length() < @java.lang.Byte@MAX_VALUE */
employee_name = /* employeeName */'smith'
/*%end*/
Using built-in functions¶
Built-in functions are utilities mainly for transforming the values of bind variables before they are bound to the SQL statement.
For example, when you run a prefix search with a LIKE clause, you can write like this:
select * from employee where
employee_name like /* @prefix(employee.employeeName) */'smith' escape '$'
@prefix(employee.employeeName)
means that we pass employee.employeeName
to the @prefix
function.
The @prefix
function converts the character sequence which is received by the parameter
to a string for forward match search.
It also escapes special characters.
For example, if the value of employee.employeeName
is ABC
, it’s converted to ABC%
.
If the value of employee.employeeName
contains %
such as AB%C
,
the %
is escaped with a default escape sequence $
,
therefore the value is converted to AB$%C%
.
You can use following function signatures:
- String @escape(CharSequence text, char escapeChar = '$')
- Escapes the character sequence for a LIKE operation. The return value is a string which is a result of escaping the character sequence. If escapeChar isn't specified, '$' is used as a default escape sequence. It returns null if you pass null as a parameter.
- String @prefix(CharSequence prefix, char escapeChar = '$')
- Converts the character sequence for prefix search. The return value is a string which is a result of escaping the character sequence and adding a wild card character at the end. If escapeChar isn't specified, '$' is used as a default escape sequence. It returns null if you pass null as a parameter.
- String @infix(CharSequence infix, char escapeChar = '$')
- Converts the character sequence for infix search. The return value is a string which is a result of escaping the character sequence and adding wild card characters at the beginning and the end. If escapeChar isn't specified, '$' is used as a default escape sequence. It returns null if you pass null as a parameter.
- String @suffix(CharSequence suffix, char escapeChar = '$')
- Converts the character sequence for suffix search. The return value is a string which is a result of escaping the character sequence and adding a wild card character at the beginning. If escapeChar isn't specified, '$' is used as a default escape sequence. It returns null if you pass null as a parameter.
- java.util.Date @roundDownTimePart(java.util.Date date)
- Rounds down the time part. The return value is a new Date whose time part is rounded down. It returns null if you pass null as a parameter.
- java.sql.Date @roundDownTimePart(java.sql.Date date)
- Rounds down the time part. The return value is a new Date whose time part is rounded down. It returns null if you pass null as a parameter.
- java.sql.Timestamp @roundDownTimePart(java.sql.Timestamp timestamp)
- Rounds down the time part. The return value is a new Timestamp whose time part is rounded down. It returns null if you pass null as a parameter.
- java.util.Date @roundUpTimePart(java.util.Date date)
- Rounds up the time part. The return value is a new Date whose time part is rounded up. It returns null if you pass null as a parameter.
- java.sql.Date @roundUpTimePart(java.sql.Date date)
- Rounds up the time part. The return value is a new Date whose time part is rounded up. It returns null if you pass null as a parameter.
- java.sql.Timestamp @roundUpTimePart(java.sql.Timestamp timestamp)
- Rounds up the time part. The return value is a new Timestamp whose time part is rounded up. It returns null if you pass null as a parameter.
- boolean @isEmpty(CharSequence charSequence)
- Returns true if the character sequence is null or its length is 0.
- boolean @isNotEmpty(CharSequence charSequence)
- Returns true if the character sequence isn't null and its length isn't 0.
- boolean @isBlank(CharSequence charSequence)
- Returns true only if the character sequence is null, its length is 0, or the sequence is formed of whitespace only.
- boolean @isNotBlank(CharSequence charSequence)
- Returns true if the character sequence isn't null, its length isn't 0, and the sequence isn't formed of whitespace only.
These functions correspond to the methods of org.seasar.doma.expr.ExpressionFunctions.
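For example, @isNotEmpty and @infix can be combined with a condition directive so that the LIKE condition is added only when a value is given. The following sketch embeds the template with the experimental @Sql annotation; the DAO method name is illustrative:
@Dao
public interface EmployeeDao {
    @Sql("select * from employee where\n"
            + "/*%if @isNotEmpty(employeeName) */\n"
            + "  employee_name like /* @infix(employeeName) */'smith' escape '$'\n"
            + "/*%end*/")
    @Select
    List<Employee> selectByNamePart(String employeeName);
}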
Using custom functions¶
You can define and use your own functions.
You need to follow these steps to use custom functions that you define yourself:
- The function is defined as a method of a class which implements
org.seasar.doma.expr.ExpressionFunctions
. - The method is a public instance method.
- The class is registered as an option in Annotation processing.
The key of the option is
doma.expr.functions
. - The instance of the class you create is used in an RDBMS dialect in your configuration class
(The implementations of RDBMS dialect provided by Doma can receive
ExpressionFunctions
in the constructor).
To call a custom function, add @
at the beginning of the function name like built-in functions.
For example, you can call myfunc
function like this:
select * from employee where
employee_name = /* @myfunc(employee.employeeName) */'smith'
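The following is a minimal sketch of such a class. It assumes that you extend StandardExpressionFunctions, one of the ExpressionFunctions implementations shipped with Doma (check the exact base class and its package for your Doma version), and that the class is registered with the doma.expr.functions option described in Annotation processing:
public class MyExpressionFunctions extends StandardExpressionFunctions {

    public MyExpressionFunctions() {
        super();
    }

    // Available in SQL templates as /* @myfunc(...) */
    public String myfunc(String text) {
        return text == null ? null : text.trim();
    }
}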
Transaction¶
Doma supports local transactions. This section explains how to configure and use a local transaction.
If you want to use global transactions, use a framework or application server that supports JTA (Java Transaction API).
Configuration¶
To use local transactions, the following conditions are required:
- Return
LocalTransactionDataSource
fromgetDataSource
inConfig
- Generate
LocalTransactionManager
using theLocalTransactionDataSource
above in the constructor - Use the
LocalTransactionManager
above to control database access
There are several ways to generate and get the LocalTransactionManager
,
but the simplest way is to generate it in the constructor of the Config implementation class and make that class a singleton.
Here is an example:
@SingletonConfig
public class AppConfig implements Config {
private static final AppConfig CONFIG = new AppConfig();
private final Dialect dialect;
private final LocalTransactionDataSource dataSource;
private final TransactionManager transactionManager;
private AppConfig() {
dialect = new H2Dialect();
dataSource = new LocalTransactionDataSource(
"jdbc:h2:mem:tutorial;DB_CLOSE_DELAY=-1", "sa", null);
transactionManager = new LocalTransactionManager(
dataSource.getLocalTransaction(getJdbcLogger()));
}
@Override
public Dialect getDialect() {
return dialect;
}
@Override
public DataSource getDataSource() {
return dataSource;
}
@Override
public TransactionManager getTransactionManager() {
return transactionManager;
}
public static AppConfig singleton() {
return CONFIG;
}
}
Note
The @SingletonConfig
annotation indicates that this class is a singleton.
Usage¶
Let's look at some examples that use the following DAO interface, which is configured with the AppConfig class shown in the Configuration section above.
@Dao(config = AppConfig.class)
public interface EmployeeDao {
...
}
The dao
used in the code examples below is an instance of this interface.
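For example, a DAO instance can be obtained by instantiating the implementation class generated by annotation processing (named with the Impl suffix by default):
EmployeeDao dao = new EmployeeDaoImpl();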
Start and finish transactions¶
You can start a transaction with one of the following methods of TransactionManager
:
- required
- requiresNew
- notSupported
Use a lambda expression to write a process which you want to run in a transaction.
TransactionManager tm = AppConfig.singleton().getTransactionManager();
tm.required(() -> {
Employee employee = dao.selectById(1);
employee.setName("hoge");
employee.setJobType(JobType.PRESIDENT);
dao.update(employee);
});
The transaction is committed if the lambda expression finishes successfully. The transaction is rolled back if the lambda expression throws an exception.
Explicit rollback¶
Besides throwing an exception, you can use setRollbackOnly
method to roll back a transaction.
TransactionManager tm = AppConfig.singleton().getTransactionManager();
tm.required(() -> {
Employee employee = dao.selectById(1);
employee.setName("hoge");
employee.setJobType(JobType.PRESIDENT);
dao.update(employee);
// Mark as rollback
tm.setRollbackOnly();
});
Savepoint¶
With a savepoint, you can cancel specific changes in a transaction.
TransactionManager tm = AppConfig.singleton().getTransactionManager();
tm.required(() -> {
// Search and update
Employee employee = dao.selectById(1);
employee.setName("hoge");
dao.update(employee);
// Create a savepoint
tm.setSavepoint("beforeDelete");
// Delete
dao.delete(employee);
// Rollback to the savepoint (cancel the deletion above)
tm.rollback("beforeDelete");
});
Building an application¶
Contents
Before You Start¶
Maven Central Repository¶
You can pull the jar file of the Doma framework from the Maven central repository. The group id and artifact id are as follows:
GroupId: | org.seasar.doma |
---|---|
ArtifactId: | doma |
See also: https://search.maven.org/artifact/org.seasar.doma/doma/
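For example, you can declare the dependency in a Maven pom.xml like this (using version 2.22.0, the latest version mentioned in this document):
<dependency>
    <groupId>org.seasar.doma</groupId>
    <artifactId>doma</artifactId>
    <version>2.22.0</version>
</dependency>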
Build with Eclipse¶
Note
Instead of the manual settings shown below, we recommend generating the Eclipse setting files automatically with the Gradle Eclipse Plugin. See also build.gradle and eclipse.gradle in the domaframework/simple-boilerplate repository.
Enabling annotation processing¶
- Select “Project > Properties” from the menu bar and open the dialog
- Select “Java Compiler > Annotation Processing” from the left menu of the dialog
- Check as follows:

Setting factory path¶
- Select “Project > Properties” from the menu bar and open the dialog
- Select “Java Compiler > Annotation Processing > Factory Path” from the left menu of the dialog
- Add the jar file of the Doma framework whose version is same as the one in the Java Build Path
- Check as follows:

Build with Gradle¶
Here is an example build.gradle:
apply plugin: 'java'
// Copy the resources referred by the Doma annotation processors to
// the destinationDir of the compileJava task
task copyDomaResources(type: Sync) {
from sourceSets.main.resources.srcDirs
into compileJava.destinationDir
include 'doma.compile.config'
include 'META-INF/**/*.sql'
include 'META-INF/**/*.script'
}
compileJava {
// Depend on the above task
dependsOn copyDomaResources
options.encoding = 'UTF-8'
}
compileTestJava {
options.encoding = 'UTF-8'
// Disable the annotation processors during the test run
options.compilerArgs = ['-proc:none']
}
dependencies {
annotationProcessor "org.seasar.doma:doma:2.22.0"
implementation "org.seasar.doma:doma:2.22.0"
}
repositories {
mavenCentral()
maven {url 'https://oss.sonatype.org/content/repositories/snapshots/'}
}
Note
The description maven {url 'https://oss.sonatype.org/content/repositories/snapshots/'}
is required
only when you need the SNAPSHOT version of the Doma framework.
Note
With the above build.gradle, you will benefit from incremental annotation processing.
See also build.gradle in the domaframework/simple-boilerplate repository.
Annotation processing¶
Contents
Doma uses Pluggable Annotation Processing API at compile time.
In this document, we describe the options for the annotation processors in Doma and show you how to pass them to build tools.
Options¶
- doma.dao.package
- The package that the generated implementation classes of interfaces annotated with @Dao belong to. The specified value overrides the value of doma.dao.subpackage. The default value is the same package as the one the interfaces annotated with @Dao belong to.
- doma.dao.subpackage
- The subpackage that the generated implementation classes of interfaces annotated with @Dao belong to. The specified value is overridden by the value of doma.dao.package. If this value is impl and the package of the interfaces annotated with @Dao is example.dao, the generated implementation classes belong to the package example.dao.impl.
- doma.dao.suffix
- The name suffix that the generated implementation classes of interfaces annotated with @Dao have. If this value is Bean and the simple name of the interface annotated with @Dao is EmployeeDao, the simple name of the generated implementation class is EmployeeDaoBean. The default value is Impl.
- doma.debug
- Whether to output the debug log in annotation processing. If the value is true, the annotation processors output the debug log. The default value is false.
- doma.domain.converters
- The fully qualified names of the classes annotated with @DomainConverters. The names are described as a comma-separated list. This value is used to find external domain classes.
- doma.entity.field.prefix
- The name prefix that the fields of the generated entity meta classes have. The value none means the prefix is not used. The default value is $.
- doma.expr.functions
- The fully qualified name of the class that implements org.seasar.doma.expr.ExpressionFunctions. The default value is org.seasar.doma.expr.ExpressionFunctions. This value is used to determine which functions are available in expression comments.
- doma.resources.dir
- The resource directory that contains the resource files such as a doma.compile.config file and SQL files. Specify the value as an absolute path. If the value is not specified, the resource directory is the same as the directory where the classes are generated.
- doma.sql.validation
- Whether to validate the existence of SQL files and the grammar of SQL comments. If the value is true, the validations run. To disable the validations, set false. The default value is true.
- doma.version.validation
- Whether to validate the versions of doma.jar between runtime and compile time. If the value is true, the validation runs. To disable the validation, set false. The default value is true.
- doma.config.path
- The file path of the configuration file for Doma. The default value is doma.compile.config.
Setting options in Eclipse¶
- Select “Project > Properties” from the menu bar and open the dialog
- Select “Java Compiler > Annotation Processing” from the left menu of the dialog
- Add “Processor options”
Setting options in IntelliJ IDEA¶
- Select “Preferences” from the menu bar and open the dialog
- Select “Build, Execution, Deployment > Compiler > Annotation Processors” from the left menu of the dialog
- Add “Annotation Processor options”
Setting options in Gradle¶
compileJava {
options {
compilerArgs = ['-Adoma.dao.subpackage=impl', '-Adoma.dao.suffix=Impl']
}
}
Setting options with configuration file¶
The options specified in the doma.compile.config
file are available in all build tools
such as Eclipse, IDEA, Gradle and so on.
The doma.compile.config
file must follow the properties file format
and be placed in the root directory such as src/main/resources
.
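For example, a doma.compile.config file with the following entries sets the subpackage for generated DAO implementations and enables the debug log (the values are illustrative):
doma.dao.subpackage=impl
doma.debug=true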
Note
The options specified in the doma.compile.config
file are overridden by
the ones specific to the build tools.
Lombok support¶
Contents
Doma supports Lombok 1.16.12 or above.
Note
If you intend to use Eclipse, use version 4.5 or above.
Overview¶
Both Lombok and Doma provide their own annotation processors. Because the execution order of annotation processors is not determined in Java, the Doma annotation processors are not always aware of the class modifications made by the Lombok annotation processors.
To resolve the issue, the Doma annotation processors recognise several Lombok annotations
and change their own behavior.
For example, if the Doma annotation processors find a class annotated with @lombok.Value,
they suppose that the class has a constructor whose arguments cover all instance fields.
Best practices¶
We show you recommended ways to define classes with Lombok annotations.
Entity class definition¶
immutable entity classes¶
- Specify
true
to theimmutable
element of@Entity
- Specify either
@lombok.Value
or@lombok.AllArgsConstructor
- Specify
@lombok.Getter
to generate getters, in case of@lombok.AllArgsConstructor
@Entity(immutable = true)
@Value
public class Employee {
@Id
Integer id;
String name;
Age age;
}
@Entity(immutable = true)
@AllArgsConstructor
@Getter
public class Worker {
@Id
private final Integer id;
private final String name;
private final Age age;
}
mutable entity classes¶
- Define a default constructor
- Specify
@lombok.Data
or@lombok.Getter
/@lombok.Setter
to generate getters/setters
@Entity
@Data
public class Person {
@Id
private Integer id;
private String name;
private Age age;
}
@Entity
@Getter
@Setter
public class Businessman {
@Id
private Integer id;
private String name;
private Age age;
}
Domain class definition¶
- Specify
@lombok.Value
- Define only one instance field whose name is
value
@Domain(valueType = Integer.class)
@Value
public class Age {
Integer value;
}
Embeddable class definition¶
- Specify either
@lombok.Value
or@lombok.AllArgsConstructor
- Specify
@lombok.Getter
to generate getters, in case of@lombok.AllArgsConstructor
@Embeddable
@Value
public class Address {
String street;
String city;
}
@Embeddable
@AllArgsConstructor
@Getter
public class Location {
private final String street;
private final String city;
}
Kotlin support¶
Contents
Doma supports Kotlin 1.3.11 or above experimentally.
Best practices¶
We show you recommended ways to define classes and build them with Kotlin.
Entity classes¶
- Define as a data class
- Specify
true
to theimmutable
element of@Entity
- Define only one constructor
- Define properties only in the constructor
- Use val for the property definitions
@Entity(immutable = true)
data class Person(
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
val id: Int? = null,
val name: Name,
val address: Address)
Domain classes¶
- Define as a data class
- Define only one constructor
- Define only one property whose name is
value
in the constructor - Use val for the property definition
@Domain(valueType = String::class)
data class Name(val value: String)
Embeddable classes¶
- Define as a data class
- Define only one constructor
- Define properties only in the constructor
- Use val for the property definitions
@Embeddable
data class Address(val city: String, val street: String)
Dao interfaces¶
- Specify a SQL template to
@org.seasar.doma.experimental.Sql
- Use
org.seasar.doma.jdbc.Result
as the return type of@Delete
,@Insert
and@Update
- Use
org.seasar.doma.jdbc.BatchResult
as the return type of@BatchDelete
,@BatchInsert
and@BatchUpdate
@Dao(config = AppConfig::class)
interface PersonDao {
@Sql("""
select * from person where id = /*id*/0
""")
@Select
fun selectById(id: Int): Person
@Insert
fun insert(person: Person): Result<Person>
}
- Use Destructuring Declarations
for
org.seasar.doma.jdbc.Result
andorg.seasar.doma.jdbc.BatchResult
val dao: PersonDao = ...
val person = Person(name = Name("Jhon"), address = Address(city = "Tokyo", street = "Yaesu"))
val (newPerson, count) = dao.insert(person)
Using kapt in Gradle¶
Annotation processors are supported in Kotlin with the kapt compiler plugin.
Add the dependencies using the kapt and implementation configurations in your dependencies block:
dependencies {
kapt "org.seasar.doma:doma:2.22.0"
implementation "org.seasar.doma:doma:2.22.0"
}
If you use resource files such as SQL files, make the kapt find them:
kapt {
arguments {
arg("doma.resources.dir", compileKotlin.destinationDir)
}
}
task copyDomaResources(type: Sync) {
from sourceSets.main.resources.srcDirs
into compileKotlin.destinationDir
include 'doma.compile.config'
include 'META-INF/**/*.sql'
include 'META-INF/**/*.script'
}
compileKotlin {
dependsOn copyDomaResources
}
Note
Remember that you always have options as follows:
- Write all codes with Kotlin
- Write all codes with Java
- Write codes annotated with Doma’s annotations in Java and others in Kotlin
The third option is worth considering, because it can avoid some troubles with the kapt.
Developer Documentation¶
Development of Doma¶
Contents
Note
This document is written for the developers of Doma.
Before You Start¶
To build you will need Git and JDK 8.
Be sure that your JAVA_HOME
environment variable points to the jdk1.8.0
folder extracted from the JDK download.
Get the Source Code¶
$ git clone https://github.com/domaframework/doma.git
$ cd doma
Build from the Command Line¶
$ ./gradlew build
Format the Source Code¶
We use google-java-format 1.6 for code formatting.
Continuous Integration¶
We use Travis CI for continuous integration. All pull requests to the master branch are tested on Travis CI.
Documents¶
We use Sphinx to generate documents. To use Sphinx you will need Python.
Set up an environment¶
$ cd docs
$ pip install -r requirements.txt
Generate HTML files¶
Execute the sphinx-autobuild command in the docs
directory:
$ sphinx-autobuild . _build/html
Visit the webpage served at http://127.0.0.1:8000.
About Doma¶
Release Notes¶
v2.22.0: 2019-01-20¶
v2.21.0: 2019-01-06¶
- GH267 Change the description of Doma
- GH266 Remove descriptions of Doma 3
- GH265 Polish annotation processing
- GH264 Improve build script example
- GH263 Translate resource bundle messages from Japanese into English
- GH262 Translate javadoc comments from Japanese into English
- GH261 Change the sphinx language option
- GH260 Translate development.rst
- GH259 Format with google-java-format 1.6
- GH258 Translate docs from Japanese into English
- GH257 Use Sphinx 1.8.2
v2.20.0: 2018-12-23¶
- GH255 Support Gradle incremental annotation processing
- GH254 Specify supported Java versions
- GH253 Explain how to write gradle build script in Gradle 5.0
- GH252 Polish Gradle configuration
- GH251 Use Gradle 5.0
- GH248 Fix mistake in abbreviation for JRE
- GH247 Load a class with Class.forName when the context class loader fails to load the class
- GH245 Revise wrong descriptions about @Update.excludeNull, @BatchUpdate.include and @BatchUpdate.exclude
v2.19.3: 2018-09-02¶
v2.19.1: 2018-01-08¶
v2.19.0: 2017-11-19¶
v2.18.0: 2017-10-28¶
v2.17.0: 2017-09-09¶
v2.16.1: 2017-05-14¶
v2.15.0: 2017-02-05¶
v2.14.0: 2017-01-14¶
- GH183 Add BatchUpdateExecutor, BatchDeleteExecutor, and MapBatchInsertBuilder
- GH182 Fix a compilation error in the code generated for an embeddable class that defines no properties
- GH181 Add @SqlProcessor for handling the SQL built from SQL templates
- GH180 Support Lombok
- GH179 Fix StandardExpressionFunctions#escape not using escapeChar
- GH177 Support Kotlin 1.0.6
- GH176 Add BatchInsertExecutor
- GH175 Support LocalDate and LocalDateTime in built-in functions
- GH174 Add MapInsertBuilder, which builds INSERT statements automatically from a Map parameter
v2.12.0: 2016-07-14¶
v2.11.0: 2016-06-18¶
v2.10.0: 2016-05-28¶
- GH146 Fix a bug where update operations fail when an embeddable object is null
- GH145 Add documentation about Kotlin support
- GH142 Add documentation about embeddable classes
- GH141 Simplify access to parent properties when an entity class uses inheritance
- GH140 Fix a bug where assigning null to a primitive-type property threw an exception; the default value is now set instead
- GH139 Support @Embeddable
- GH138 Remove the need for @ParameterName when defining immutable entities in Kotlin
v2.9.0: 2016-05-16¶
v2.7.0: 2016-02-27¶
v2.6.2: 2016-02-11¶
v2.6.0: 2015-11-21¶
v2.4.0: 2015-08-14¶
- GH93 Fix an issue where JdbcMappingHint#getDomainClass() does not return the domain class
- GH89 Make PortableObjectType generic so that String and other types can be specified as valueType
- GH88 Fix a typo in a JdbcLogger method name: Failuer -> Failure
- GH87 Add public constructors to the subclasses of StandardExpressionFunctions
- GH86 Version number spec is different from the document
- GH84 Fix an issue where DOMA4122 is reported for methods that use populate
- GH81 Fall back to the default resource bundle when a resource bundle cannot be obtained
v2.2.0: 2015-03-28¶
- GH71 Allow interfaces to be annotated with @Domain
- GH70 Obtain EntityListener instances through EntityListenerProvider
- GH67 Fix an issue where paging is not executed correctly when an SQL Server OPTION clause exists
- GH66 Apply naming conventions at runtime instead of at compile time
- GH64 Fix a bug where fetching an immutable entity throws NullPointerException
- GH61 Use OFFSET-FETCH, introduced in SQL Server 2012, for paging
- GH60 Change the value returned by Mssql2008Dialect#getName()
- GH59 Fix tests failing in Windows environments
- GH58 Fix StringUtil#fromCamelCaseToSnakeCase not producing the intended result when a column name contains digits
v2.1.0: 2014-12-30¶
v2.0.1: 2014-08-06¶
Fix an issue where specifying byte[] as the second type argument of DomainConverter causes a compilation error in annotation processing
v2.0.0: 2014-07-02¶
Remove UnitOfWork
v2.0-beta-5: 2014-06-07¶
- Make DAO methods that return List<Optional<Emp>> or List<Optional<Map<String, Object>>> a compilation error in annotation processing
- Fix an issue where changes are not reflected in OriginalStates after an entity update
- Skip automatic identifier generation when the entity identifier already has a value
- Fix an issue where the column list expansion comment causes a DOMA4257 error
- Allow SQL logging to be controlled with annotations
- Make the log messages output from DAOs more detailed
- Change the logger name of UtilLoggingJdbcLogger to the fully qualified class name
- Fix an issue where the SQL file path is not logged when SQL is executed
v2.0-beta-4: 2014-05-04¶
- Upgrade the Pluggable Annotation Processing API visitors to the Java 8 versions
- Treat an empty java.util.Iterable bound to an IN clause as SQL null
- Support java.sql.SQLXML
- Fix a bug in LocalTransaction where savepoints before the specified one were removed instead of those after it
- Fix a bug where the log message on savepoint removal in LocalTransaction was wrong
- Fix a bug where annotation processing fails when the type of an entity property is a byte array
v2.0-beta-3: 2014-04-03¶
- 検索結果を
java.util.stream.Collector
で処理できるようにしました。 LocalTransactionManager
からTransactionManager
インタフェースを抽出しました。Config
で指定した設定が一部無視される不具合を修正しました。- マップのネーミング規約を一律制御するためのインタフェース
MapKeyNaming
を追加しました。 java.time.LocalDate
、java.time.LocalTime
、java.time.LocalDateTime
を基本型として使用できるようにしました。JdbcLogger
の実装の差し替えを容易にするためにAbstractJdbcLogger
を追加しました。SelectStrategyType
の名前をSelectType
に変更しました。
Frequently Asked Questions¶
Contents
- Frequently Asked Questions
- General Questions
- Runtime environment
- Development environment
- Which version of JDK does Doma support?
- Which IDE do you recommend?
- In Eclipse, the jar file of Doma is added to the Java Build Path but annotation processing doesn’t run.
- Where are generated source files in annotation processing?
- In Eclipse, where is the .apt_generated directory?
- I get the message that the sql file is not found, but it exists.
- Features as the database access library
General Questions¶
What does Doma stand for?¶
Doma stands for the Dao Oriented database MApping framework.
What is annotation processing?¶
Annotation processing, which was introduced in Java 6, allows us to validate and generate source code at compile time.
We use annotation processing to do the following:
- Generating meta classes from the classes annotated with
@Entity
and@Domain
- Generating implementation classes of the interfaces annotated with
@Dao
- Validating SQL statements
Runtime environment¶
Which version of JRE does Doma support?¶
JRE 8, 9, 10, and 11.
Development environment¶
Which version of JDK does Doma support?¶
JDK 8, 9, 10, and 11.
Which IDE do you recommend?¶
We recommend Eclipse and IntelliJ IDEA.
In Eclipse, the jar file of Doma is added to the Java Build Path but annotation processing doesn’t run.¶
Enable annotation processing and add the jar file to the Factory Path too. See also Build with Eclipse.
Where are generated source files in annotation processing?¶
In Eclipse, they are found in the .apt_generated directory.
In Eclipse, where is the .apt_generated directory?¶
You can find it in the Navigator view.
I get the message that the sql file is not found, but it exists.¶
You may get the following message, though the file exists:
[DOMA4019] The file[META-INF/../select.sql] is not found from the classpath
When you use Eclipse, check in the Java Build Path dialog that the output folder for resources is the same as the one for class files.
When you use Gradle, check that the resource files are copied to compileJava.destinationDir
in advance of the compileJava task. See also Build with Gradle.
Features as the database access library¶
Does Doma generate SQL statements?¶
Yes, Doma generates following statements:
- INSERT
- DELETE
- UPDATE
- stored procedure call
- stored function call
Doma doesn’t generate SELECT statements but executes arbitrary SELECT statements and maps the results to the Java objects.
See also Queries for detail information.
How dynamic SQL statements are executed?¶
Dynamic SQL statements are built with directives that are represented as SQL comments.
See also SQL templates for detail information.
Does Doma map database relationships such as one-to-one and one-to-many to Java objects?¶
No.
Doma only maps each row of the SQL result set to a Java entity instance.
Does Doma provide a JDBC connection pooling feature?¶
No.
Use Doma together with the JDBC connection pool library such as HikariCP.