
Welcome to Doma 2¶
Doma 2 is a database access framework for Java 8+. Doma has various strengths:
- Verifies and generates source code at compile time using annotation processing.
- Maps database columns to user-defined Java objects.
- Uses SQL templates, called “two-way SQL”.
- Supports classes introduced in Java 8, such as java.time.LocalDate, java.util.Optional, and java.util.stream.Stream.
- Has no dependencies on other libraries.
This document consists of the following sections:
Help improve our documentation
Have feedback or a question? Found a typo?
Please add an issue or pull request to https://github.com/domaframework/doma and we’ll get back to you.
User Documentation¶
Get started!(IntelliJ IDEA)¶
Contents
Summary¶
This section introduces how to set up a development environment and how to execute basic database access.
Install JDK¶
Install JDK 8.
Note
Doma supports JDK 8 and later. See also Which version of JDK does Doma support?.
Install IntelliJ IDEA¶
Install IntelliJ IDEA Community Edition .
Note
In this document, we use IntelliJ IDEA 2019.3.4 (Community Edition). We recommend using a newer version because the integration with Gradle is much better. If you use IntelliJ IDEA Ultimate Edition, consider using the IntelliJ Doma support plugin.
Import template project¶
Clone simple-boilerplate from GitHub:
$ git clone https://github.com/domaframework/simple-boilerplate.git
Start IntelliJ IDEA, execute Import Project, and select the cloned simple-boilerplate:

Check Import project from external model and select Gradle:

Push the Finish button. If a message indicating a successful sync is shown in the Build tool window, the import succeeded.
Structure of template project¶
The structure of the project's source code is as follows:
─ src
  ├── main
  │   ├── java
  │   │   └── boilerplate
  │   │       ├── AppConfig.java
  │   │       ├── dao
  │   │       │   ├── AppDao.java
  │   │       │   └── EmployeeDao.java
  │   │       └── entity
  │   │           └── Employee.java
  │   └── resources
  │       └── META-INF
  │           └── boilerplate
  │               └── dao
  │                   ├── AppDao
  │                   │   ├── create.script
  │                   │   └── drop.script
  │                   └── EmployeeDao
  │                       ├── selectAll.sql
  │                       └── selectById.sql
  └── test
      ├── java
      │   └── boilerplate
      │       ├── DbResource.java
      │       └── dao
      │           └── EmployeeDaoTest.java
      └── resources
The important files are:
- AppConfig.java
  - The configuration needed to execute Doma.
- AppDao.java
  - A utility that creates and drops the database schema used in this application. It is not needed in a production environment. The script files under META-INF/boilerplate/dao/AppDao/ are used to create and drop the schema.
- Employee.java
  - The entity class that corresponds to the EMPLOYEE table in the database.
- EmployeeDao.java
  - The DAO interface that retrieves and updates the Employee class. The SQL files under META-INF/boilerplate/dao/EmployeeDao/ are used.
- EmployeeDaoTest.java
  - A test that uses EmployeeDao. You can learn about Doma by adding test cases to this file. Because the database schema is created and dropped per test method, other tests are not affected by data updates.
SQL file¶
Open the META-INF/boilerplate/dao/EmployeeDao/selectById.sql file.
Its content is as follows:
select
/*%expand*/*
from
employee
where
id = /* id */0
The /*%expand*/ directive indicates that the column list is expanded by referencing the entity class mapped to the Java method.
The /* id */ expression indicates that the value of the Java method parameter named id is bound at this position in the SQL.
The 0 that follows it is test data.
Thanks to this test data, you can easily execute the SQL with a tool and confirm that it contains no mistakes.
The test data is not used when the SQL is executed from a Java program.
For details, see SQL templates.
Search¶
Call a DAO method annotated with @Select to execute a search.
Add searching process¶
We show how to add a search for employees younger than an arbitrary age.
Add the following code to EmployeeDao and execute a build:
@Select
List<Employee> selectByAge(Integer age);
At this time, annotation processing shows the following error message in the Messages tool window:
[DOMA4019] The file[META-INF/boilerplate/dao/EmployeeDao/selectByAge.sql] is not found from the classpath.
Create an empty file named selectByAge.sql under src/main/resources/META-INF/boilerplate/dao/EmployeeDao and rebuild.
The error message changes:
[DOMA4020] The SQL template is empty. PATH=[META-INF/boilerplate/dao/EmployeeDao/selectByAge.sql].
Go back to the selectByAge.sql file and write the following SQL:
select
/*%expand*/*
from
employee
where
age < /* age */0
Rebuild, and the error is resolved.
Execute searching process¶
Now execute the search process created above.
Add the following code to EmployeeDaoTest:
@Test
public void testSelectByAge() {
    TransactionManager tm = AppConfig.singleton().getTransactionManager();
    tm.required(() -> {
        List<Employee> employees = dao.selectByAge(35);
        assertEquals(2, employees.size());
    });
}
Execute JUnit and confirm that this code runs.
The SQL generated for the search is as follows:
select
age, id, name, version
from
employee
where
age < 35
Insert¶
To execute an insert, call a DAO method annotated with @Insert.
Execute insert process¶
Confirm that the following code exists in EmployeeDao:
@Insert
int insert(Employee employee);
Execute an insert using this code.
Add the following code to EmployeeDaoTest:
@Test
public void testInsert() {
    TransactionManager tm = AppConfig.singleton().getTransactionManager();
    Employee employee = new Employee();
    // First transaction
    // Execute the insert
    tm.required(() -> {
        employee.name = "HOGE";
        employee.age = 20;
        dao.insert(employee);
        assertNotNull(employee.id);
    });
    // Second transaction
    // Confirm that the insert succeeded
    tm.required(() -> {
        Employee employee2 = dao.selectById(employee.id);
        assertEquals("HOGE", employee2.name);
        assertEquals(Integer.valueOf(20), employee2.age);
        assertEquals(Integer.valueOf(1), employee2.version);
    });
}
Execute JUnit and confirm that this code runs.
The SQL generated for the insert is as follows:
insert into Employee (age, id, name, version) values (20, 100, 'HOGE', 1)
The identifier and version number are set automatically.
Update¶
To execute an update, call a DAO method annotated with @Update.
Execute update process¶
Confirm that the following code exists in EmployeeDao:
@Update
int update(Employee employee);
Execute an update using this code.
Add the following code to EmployeeDaoTest:
@Test
public void testUpdate() {
    TransactionManager tm = AppConfig.singleton().getTransactionManager();
    // First transaction
    // Search and update the age field
    tm.required(() -> {
        Employee employee = dao.selectById(1);
        assertEquals("ALLEN", employee.name);
        assertEquals(Integer.valueOf(30), employee.age);
        assertEquals(Integer.valueOf(0), employee.version);
        employee.age = 50;
        dao.update(employee);
        assertEquals(Integer.valueOf(1), employee.version);
    });
    // Second transaction
    // Confirm that the update succeeded
    tm.required(() -> {
        Employee employee = dao.selectById(1);
        assertEquals("ALLEN", employee.name);
        assertEquals(Integer.valueOf(50), employee.age);
        assertEquals(Integer.valueOf(1), employee.version);
    });
}
Execute JUnit and confirm that this code runs.
The SQL generated for the update is as follows:
update Employee set age = 50, name = 'ALLEN', version = 0 + 1 where id = 1 and version = 0
The version number used for optimistic concurrency control is incremented automatically.
Delete¶
To execute a delete, call a DAO method annotated with @Delete.
Execute delete process¶
Confirm that the following code exists in EmployeeDao:
@Delete
int delete(Employee employee);
Execute a delete using this code.
Add the following code to EmployeeDaoTest:
@Test
public void testDelete() {
    TransactionManager tm = AppConfig.singleton().getTransactionManager();
    // First transaction
    // Execute the delete
    tm.required(() -> {
        Employee employee = dao.selectById(1);
        dao.delete(employee);
    });
    // Second transaction
    // Confirm that the delete succeeded
    tm.required(() -> {
        Employee employee = dao.selectById(1);
        assertNull(employee);
    });
}
Execute JUnit and confirm that this code runs.
The SQL generated for the delete is as follows:
delete from Employee where id = 1 and version = 0
The identifier and version number are specified in the search condition.
Get started!(Eclipse)¶
Contents
Summary¶
This section introduces how to set up a development environment and how to execute basic database access.
Install JDK¶
Install JDK 8.
Note
Doma supports JDK 8 and later. See also Which version of JDK does Doma support?.
Install Eclipse¶
You install Eclipse .
Note
This document verifies the steps with Eclipse Standard 4.4, but they should also work on Eclipse IDE for Java EE Developers and other distributions, as well as on newer versions.
Install Doma Tools, the Eclipse plugin¶
Doma Tools is a plugin that enables mutual navigation between Java files and SQL files. The plugin is not required to use Doma, but using it improves productivity.
Select Help > Install New Software… from the menu bar and enter the following URL into the 'Work with' textbox:
http://dl.bintray.com/domaframework/eclipse/
The installable plugins are shown as below; check the latest version of Doma Tools and proceed through the dialog to finish the installation.

Associate to file¶
Doma Tools executes annotation processing by hooking file updates. For this to work, you need to open SQL files within Eclipse.
Select Eclipse > Preferences… or Window > Preferences from the menu bar to open the preferences dialog.
Associate files with the .sql extension with the Text Editor, as shown in the figure below.

Similarly, associate files with the .script extension with the Text Editor.

Note
You can skip this setting if you use Eclipse IDE for Java EE Developers, because SQL files are associated with a specialized editor by default.
Note
We recommend a development style in which you create SQL with RDBMS-specific tools (such as Oracle SQL Developer or pgAdmin) and copy the finished SQL into the Eclipse editor.
Import template project¶
Clone simple-boilerplate from GitHub:
$ git clone https://github.com/domaframework/simple-boilerplate.git
Move to the cloned directory.
$ cd simple-boilerplate
Create the Eclipse config files with the following command:
$ ./gradlew eclipse
Note
On Windows, run gradlew eclipse instead of ./gradlew eclipse.
Note
Set the environment variable JAVA_HOME to the directory where JDK 8 (or later) is installed.
It is needed to execute gradlew.
Note
The settings for annotation processing are included in the Eclipse config files. See Build with Eclipse if you want to configure them manually.
Select File > Import… from the Eclipse menu bar, select 'Existing Projects into Workspace', and import simple-boilerplate.

Select the project and execute JUnit to confirm that the import completed. If one test case succeeds, the import finished normally.
Structure of template project¶
The structure of the project's source code is as follows:
─ src
  ├── main
  │   ├── java
  │   │   └── boilerplate
  │   │       ├── AppConfig.java
  │   │       ├── dao
  │   │       │   ├── AppDao.java
  │   │       │   └── EmployeeDao.java
  │   │       └── entity
  │   │           └── Employee.java
  │   └── resources
  │       └── META-INF
  │           └── boilerplate
  │               └── dao
  │                   ├── AppDao
  │                   │   ├── create.script
  │                   │   └── drop.script
  │                   └── EmployeeDao
  │                       ├── selectAll.sql
  │                       └── selectById.sql
  └── test
      ├── java
      │   └── boilerplate
      │       ├── DbResource.java
      │       └── dao
      │           └── EmployeeDaoTest.java
      └── resources
The important files are:
- AppConfig.java
  - The configuration needed to execute Doma.
- AppDao.java
  - A utility that creates and drops the database schema used in this application. It is not needed in a production environment. The script files under META-INF/boilerplate/dao/AppDao/ are used to create and drop the schema.
- Employee.java
  - The entity class that corresponds to the EMPLOYEE table in the database.
- EmployeeDao.java
  - The DAO interface that retrieves and updates the Employee class. The SQL files under META-INF/boilerplate/dao/EmployeeDao/ are used.
- EmployeeDaoTest.java
  - A test that uses EmployeeDao. You can learn about Doma by adding test cases to this file. Because the database schema is created and dropped per test method, other tests are not affected by data updates.
Mutual transition between Java file and SQL file¶
EmployeeDao.java is defined as follows:
@Dao(config = AppConfig.class)
public interface EmployeeDao {

    @Select
    List<Employee> selectAll();

    @Select
    Employee selectById(Integer id);

    @Insert
    int insert(Employee employee);

    @Update
    int update(Employee employee);

    @Delete
    int delete(Employee employee);
}
Move the cursor to the selectById method in the Eclipse editor and right-click to show the context menu.
You can navigate to the META-INF/boilerplate/dao/EmployeeDao/selectById.sql file by selecting Doma > Jump to SQL in the menu.
Next, place the cursor anywhere in the META-INF/boilerplate/dao/EmployeeDao/selectById.sql file and show the context menu.
You can go back to the EmployeeDao.java file by selecting Doma > Jump to Java in the menu.
SQL File¶
Open the META-INF/boilerplate/dao/EmployeeDao/selectById.sql file.
Its content is as follows:
select
/*%expand*/*
from
employee
where
id = /* id */0
The /*%expand*/ directive indicates that the column list is expanded by referencing the entity class mapped to the Java method.
The /* id */ expression indicates that the value of the Java method parameter named id is bound at this position in the SQL.
The 0 that follows it is test data.
Thanks to this test data, you can easily execute the SQL with a tool and confirm that it contains no mistakes.
The test data is not used when the SQL is executed from a Java program.
For details, see SQL templates.
Search¶
Call a DAO method annotated with @Select to execute a search.
Add searching process¶
We show how to add a search for employees younger than an arbitrary age.
Add the following code to EmployeeDao:
@Select
List<Employee> selectByAge(Integer age);
At this time, annotation processing shows the following error message in Eclipse:
[DOMA4019] The file[META-INF/boilerplate/dao/EmployeeDao/selectByAge.sql] is not found from the classpath.
Move the cursor to the selectByAge method, right-click to show the context menu, and select Doma > Jump to SQL in the menu.
The dialog for creating the SQL file is shown as below.

Push 'Finish' to create the file.
After creating the file, save it while it is still empty and go back to EmployeeDao; the error message changes:
[DOMA4020] The SQL template is empty. PATH=[META-INF/boilerplate/dao/EmployeeDao/selectByAge.sql].
Go back to the selectByAge.sql file and write the following SQL:
select
/*%expand*/*
from
employee
where
age < /* age */0
Then the error is resolved.
Execute searching process¶
Now execute the search process created above.
Add the following code to EmployeeDaoTest:
@Test
public void testSelectByAge() {
    TransactionManager tm = AppConfig.singleton().getTransactionManager();
    tm.required(() -> {
        List<Employee> employees = dao.selectByAge(35);
        assertEquals(2, employees.size());
    });
}
Execute JUnit and confirm that this code runs.
The SQL generated for the search is as follows:
select
age, id, name, version
from
employee
where
age < 35
Insert¶
To execute an insert, call a DAO method annotated with @Insert.
Execute insert process¶
Confirm that the following code exists in EmployeeDao:
@Insert
int insert(Employee employee);
Execute an insert using this code.
Add the following code to EmployeeDaoTest:
@Test
public void testInsert() {
    TransactionManager tm = AppConfig.singleton().getTransactionManager();
    Employee employee = new Employee();
    // First transaction
    // Execute the insert
    tm.required(() -> {
        employee.name = "HOGE";
        employee.age = 20;
        dao.insert(employee);
        assertNotNull(employee.id);
    });
    // Second transaction
    // Confirm that the insert succeeded
    tm.required(() -> {
        Employee employee2 = dao.selectById(employee.id);
        assertEquals("HOGE", employee2.name);
        assertEquals(Integer.valueOf(20), employee2.age);
        assertEquals(Integer.valueOf(1), employee2.version);
    });
}
Execute JUnit and confirm that this code runs.
The SQL generated for the insert is as follows:
insert into Employee (age, id, name, version) values (20, 100, 'HOGE', 1)
The identifier and version number are set automatically.
Update¶
To execute an update, call a DAO method annotated with @Update.
Execute update process¶
Confirm that the following code exists in EmployeeDao:
@Update
int update(Employee employee);
Execute an update using this code.
Add the following code to EmployeeDaoTest:
@Test
public void testUpdate() {
    TransactionManager tm = AppConfig.singleton().getTransactionManager();
    // First transaction
    // Search and update the age field
    tm.required(() -> {
        Employee employee = dao.selectById(1);
        assertEquals("ALLEN", employee.name);
        assertEquals(Integer.valueOf(30), employee.age);
        assertEquals(Integer.valueOf(0), employee.version);
        employee.age = 50;
        dao.update(employee);
        assertEquals(Integer.valueOf(1), employee.version);
    });
    // Second transaction
    // Confirm that the update succeeded
    tm.required(() -> {
        Employee employee = dao.selectById(1);
        assertEquals("ALLEN", employee.name);
        assertEquals(Integer.valueOf(50), employee.age);
        assertEquals(Integer.valueOf(1), employee.version);
    });
}
Execute JUnit and confirm that this code runs.
The SQL generated for the update is as follows:
update Employee set age = 50, name = 'ALLEN', version = 0 + 1 where id = 1 and version = 0
The version number used for optimistic concurrency control is incremented automatically.
Delete¶
To execute a delete, call a DAO method annotated with @Delete.
Execute delete process¶
Confirm that the following code exists in EmployeeDao:
@Delete
int delete(Employee employee);
Execute a delete using this code.
Add the following code to EmployeeDaoTest:
@Test
public void testDelete() {
    TransactionManager tm = AppConfig.singleton().getTransactionManager();
    // First transaction
    // Execute the delete
    tm.required(() -> {
        Employee employee = dao.selectById(1);
        dao.delete(employee);
    });
    // Second transaction
    // Confirm that the delete succeeded
    tm.required(() -> {
        Employee employee = dao.selectById(1);
        assertNull(employee);
    });
}
Execute JUnit and confirm that this code runs.
The SQL generated for the delete is as follows:
delete from Employee where id = 1 and version = 0
The identifier and version number are specified in the search condition.
Configuration¶
Contents
- Configuration
- Configurable items
- DataSource
- DataSource’s name
- SQL dialect
- Logger
- SQL File Repository
- Controlling REQUIRES_NEW transaction
- Loading classes
- Choosing SQL format contained in exception messages
- Handling unknown columns
- Naming convention for tables and columns
- Naming convention for keys of java.util.Map
- Local transaction manager
- Adding SQL identifiers to the SQLs as a comment
- Command implementors
- Query implementors
- Query timeout
- Max rows
- Fetch size
- Batch size
- Providing entity listeners
- Loading JDBC drivers
- Configuration definition
The configurable items must be returned from the methods of the implementation class of the org.seasar.doma.jdbc.Config interface.
Configurable items¶
DataSource¶
Return a JDBC DataSource
from the getDataSource
method.
If you need local transactions provided by Doma, return a LocalTransactionDataSource
.
See also: Transaction
Note
Required item
DataSource’s name¶
Return a DataSource's name from the getDataSourceName method.
In an environment where multiple DataSources are used, the name is important.
You have to give a unique name to each DataSource.
The default value is the fully qualified name of the implementation class of Config.
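In a multi-database application, each Config implementation returns its own name. The class below is a hypothetical sketch for a second database (the class name, JDBC URL, and the "salesDb" name are invented for illustration; they are not part of the template project):

```java
import javax.sql.DataSource;

import org.seasar.doma.jdbc.Config;
import org.seasar.doma.jdbc.dialect.Dialect;
import org.seasar.doma.jdbc.dialect.H2Dialect;
import org.seasar.doma.jdbc.tx.LocalTransactionDataSource;

// Hypothetical second Config for another database.
public class SalesDbConfig implements Config {

    @Override
    public DataSource getDataSource() {
        return new LocalTransactionDataSource(
                "jdbc:h2:mem:sales;DB_CLOSE_DELAY=-1", "sa", null);
    }

    @Override
    public Dialect getDialect() {
        return new H2Dialect();
    }

    @Override
    public String getDataSourceName() {
        return "salesDb"; // must be unique among all DataSources
    }
}
```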
SQL dialect¶
Return a Dialect
from the getDialect
method.
You have to choose an appropriate dialect for the database you use.
Doma provides the following dialects:
Database | Dialect Name | Description |
---|---|---|
DB2 | Db2Dialect | |
H2 Database Engine 1.2.126 | H212126Dialect | H2 Database Engine 1.2.126 |
H2 Database | H2Dialect | H2 Database Engine 1.3.171 and above |
HSQLDB | HsqldbDialect | |
Microsoft SQL Server 2008 | Mssql2008Dialect | Microsoft SQL Server 2008 |
Microsoft SQL Server | MssqlDialect | Microsoft SQL Server 2012 and above |
MySQL | MySqlDialect | |
Oracle Database 11g | Oracle11Dialect | Oracle Database 11g |
Oracle Database | OracleDialect | Oracle Database 12c and above |
PostgreSQL | PostgresDialect | |
SQLite | SqliteDialect |
These dialects are located in the org.seasar.doma.jdbc.dialect package.
Note
Required item
Logger¶
Return a JdbcLogger
from the getJdbcLogger
method.
Doma provides the following JdbcLogger:
- org.seasar.doma.jdbc.UtilLoggingJdbcLogger
The default JdbcLogger is UtilLoggingJdbcLogger which uses java.util.logging
.
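A minimal sketch of overriding getJdbcLogger in your Config implementation, assuming you are happy with the java.util.logging-based default:

```java
// Inside your Config implementation; UtilLoggingJdbcLogger is the default,
// so override this method only to swap in your own JdbcLogger.
@Override
public JdbcLogger getJdbcLogger() {
    return new UtilLoggingJdbcLogger();
}
```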
SQL File Repository¶
Return a SqlFileRepository
from the getSqlFileRepository
method.
Doma provides the following SqlFileRepositories:
- org.seasar.doma.jdbc.GreedyCacheSqlFileRepository
- org.seasar.doma.jdbc.NoCacheSqlFileRepository
The default SqlFileRepository is GreedyCacheSqlFileRepository which caches the result of SQL parsing without limitation.
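During development it can be convenient to re-read SQL files on every execution instead of caching them. This is a sketch inside a Config implementation; the isDevelopment() check is a hypothetical flag of your own, not a Doma API:

```java
// Inside your Config implementation.
@Override
public SqlFileRepository getSqlFileRepository() {
    // NoCacheSqlFileRepository re-parses SQL files on every access, so
    // edits are picked up immediately; GreedyCacheSqlFileRepository
    // (the default) caches parse results without limitation.
    return isDevelopment()
            ? new NoCacheSqlFileRepository()
            : new GreedyCacheSqlFileRepository();
}
```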
Controlling REQUIRES_NEW transaction¶
Return a RequiresNewController
from the getRequiresNewController
method.
A RequiresNewController may begin new transactions to make transaction lock times shorter.
This feature is used only when you use @TableGenerator
which generates identities with the table.
The default RequiresNewController does nothing.
Loading classes¶
Return a ClassHelper
from the getClassHelper
method.
When the application server or framework you use loads classes in its own specific way, consider creating your own ClassHelper.
The default ClassHelper loads classes with Class#forName
mainly.
Choosing SQL format contained in exception messages¶
Return a SqlLogType from the getExceptionSqlLogType method.
The default SqlLogType contains the formatted SQL in exception messages.
Handling unknown columns¶
Return a UnknownColumnHandler
from the getUnknownColumnHandler
method.
In result set mappings, if an unknown column to an entity class is found,
the UnknownColumnHandler handles the situation.
The default UnknownColumnHandler throws an UnknownColumnException
.
Naming convention for tables and columns¶
Return a Naming
from the getNaming
method.
The naming element of @Entity takes precedence over this value.
When you specify explicit value to the name elements of @Table
and @Column
,
the naming convention is not applied to them.
The default Naming does nothing.
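For example, if your schema follows an upper snake case convention (column EMPLOYEE_NAME for property employeeName), a Config implementation might return one of the built-in Naming constants. This is a sketch, assuming that convention fits your schema:

```java
// Inside your Config implementation.
@Override
public Naming getNaming() {
    // Maps camelCase names to upper snake case,
    // e.g. employeeName <-> EMPLOYEE_NAME.
    return Naming.SNAKE_UPPER_CASE;
}
```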
Naming convention for keys of java.util.Map¶
Return a MapKeyNaming
from the getMapKeyNaming
method.
The MapKeyNaming is used when the result set is mapped to java.util.Map<String, Object>
.
The default MapKeyNaming does nothing.
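To illustrate what such a naming convention involves (the helper below is hypothetical and not part of Doma's API), a custom MapKeyNaming might convert snake_case column names into camelCase map keys roughly like this:

```java
// Hypothetical helper showing the kind of key conversion a custom
// MapKeyNaming implementation might perform.
public class MapKeySupport {

    // EMPLOYEE_NAME -> employeeName
    public static String toCamelCase(String columnName) {
        StringBuilder sb = new StringBuilder();
        boolean upperNext = false;
        for (char c : columnName.toLowerCase().toCharArray()) {
            if (c == '_') {
                upperNext = true; // capitalize the letter after an underscore
            } else {
                sb.append(upperNext ? Character.toUpperCase(c) : c);
                upperNext = false;
            }
        }
        return sb.toString();
    }
}
```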
Local transaction manager¶
Return a LocalTransactionManager
from the getTransactionManager
method.
The getTransactionManager
method throws UnsupportedOperationException
as default.
See also: Transaction
Adding SQL identifiers to the SQLs as a comment¶
Return a Commenter
from the getCommenter
method.
Doma provides the following commenter:
- org.seasar.doma.jdbc.CallerCommenter
The default Commenter does nothing.
Command implementors¶
Return a CommandImplementors
from the getCommandImplementors
method.
For example, the CommandImplementors provides you a hook to execute JDBC API.
Query implementors¶
Return a QueryImplementors
from the getQueryImplementors
method.
For example, the QueryImplementors provides you a hook to rewrite SQL statements.
Query timeout¶
Return the query timeout (in seconds) from the getQueryTimeout
method.
This value is used as default in Queries.
Fetch size¶
Return the fetch size from the getFetchSize
method.
This value is used as default in Search.
Batch size¶
Return the batch size from the getBatchSize
method.
This value is used as default in Batch insert,
Batch update and Batch delete.
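These numeric defaults can be centralized in your Config implementation. The values below are arbitrary examples, not recommendations:

```java
// Inside your Config implementation; all values are examples.
@Override
public int getQueryTimeout() {
    return 10; // seconds
}

@Override
public int getFetchSize() {
    return 100; // rows fetched per round trip
}

@Override
public int getBatchSize() {
    return 50; // statements per JDBC batch
}
```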
Providing entity listeners¶
Return an EntityListenerProvider
from the getEntityListenerProvider
method.
When you want to get entity listeners from a dependency injection container,
create your own EntityListenerProvider.
The default EntityListenerProvider gets the entity listener from the accepted supplier.
Loading JDBC drivers¶
All JDBC drivers are loaded automatically by the service provider mechanism.
Warning
But in specific environments the mechanism doesn't work appropriately; for example, you may run into this when you use Apache Tomcat. See also: DriverManager, the service provider mechanism and memory leaks
Configuration definition¶
Simple definition¶
The simple definition is appropriate in the following cases:
- The configuration instance isn't managed by a dependency injection container
- Local transactions are used
@SingletonConfig
public class AppConfig implements Config {

    private static final AppConfig CONFIG = new AppConfig();

    private final Dialect dialect;
    private final LocalTransactionDataSource dataSource;
    private final TransactionManager transactionManager;

    private AppConfig() {
        dialect = new H2Dialect();
        dataSource = new LocalTransactionDataSource(
                "jdbc:h2:mem:tutorial;DB_CLOSE_DELAY=-1", "sa", null);
        transactionManager = new LocalTransactionManager(
                dataSource.getLocalTransaction(getJdbcLogger()));
    }

    @Override
    public Dialect getDialect() {
        return dialect;
    }

    @Override
    public DataSource getDataSource() {
        return dataSource;
    }

    @Override
    public TransactionManager getTransactionManager() {
        return transactionManager;
    }

    public static AppConfig singleton() {
        return CONFIG;
    }
}
Note
Remember to annotate the class with @SingletonConfig
Specify the above class to the config element of @Dao
.
@Dao(config = AppConfig.class)
public interface EmployeeDao {

    @Select
    Employee selectById(Integer id);
}
Advanced definition¶
The advanced definition is appropriate in the following cases:
- The configuration instance is managed as a singleton object in a dependency injection container
- The transaction manager is provided by the application server or framework you use
Suppose the dialect
and the dataSource
are injected by the dependency injection container:
public class AppConfig implements Config {

    private Dialect dialect;
    private DataSource dataSource;

    @Override
    public Dialect getDialect() {
        return dialect;
    }

    public void setDialect(Dialect dialect) {
        this.dialect = dialect;
    }

    @Override
    public DataSource getDataSource() {
        return dataSource;
    }

    public void setDataSource(DataSource dataSource) {
        this.dataSource = dataSource;
    }
}
To inject the instance of the above class to your DAO implementation instance,
you have to annotate your DAO interfaces with @AnnotateWith
:
@Dao
@AnnotateWith(annotations = {
    @Annotation(target = AnnotationTarget.CONSTRUCTOR, type = javax.inject.Inject.class),
    @Annotation(target = AnnotationTarget.CONSTRUCTOR_PARAMETER, type = javax.inject.Named.class, elements = "\"config\"") })
public interface EmployeeDao {

    @Select
    Employee selectById(Integer id);
}

@Dao
@AnnotateWith(annotations = {
    @Annotation(target = AnnotationTarget.CONSTRUCTOR, type = javax.inject.Inject.class),
    @Annotation(target = AnnotationTarget.CONSTRUCTOR_PARAMETER, type = javax.inject.Named.class, elements = "\"config\"") })
public interface DepartmentDao {

    @Select
    Department selectById(Integer id);
}
To avoid annotating your DAO interfaces with @AnnotateWith
repeatedly,
annotate the arbitrary annotation with it only once:
@AnnotateWith(annotations = {
    @Annotation(target = AnnotationTarget.CONSTRUCTOR, type = javax.inject.Inject.class),
    @Annotation(target = AnnotationTarget.CONSTRUCTOR_PARAMETER, type = javax.inject.Named.class, elements = "\"config\"") })
public @interface InjectConfig {
}
Then, you can annotate your DAO interfaces with the above @InjectConfig
annotation:
@Dao
@InjectConfig
public interface EmployeeDao {

    @Select
    Employee selectById(Integer id);
}

@Dao
@InjectConfig
public interface DepartmentDao {

    @Select
    Department selectById(Integer id);
}
Basic classes¶
Contents
List of basic classes¶
- primitive types except char
- wrapper classes for the above primitive types
- enum types
- byte[]
- java.lang.String
- java.lang.Object
- java.math.BigDecimal
- java.math.BigInteger
- java.time.LocalDate
- java.time.LocalTime
- java.time.LocalDateTime
- java.sql.Date
- java.sql.Time
- java.sql.Timestamp
- java.sql.Array
- java.sql.Blob
- java.sql.Clob
- java.sql.SQLXML
- java.util.Date
Differences between temporal classes¶
java.time.LocalDate: represents SQL DATE
java.time.LocalTime: represents SQL TIME
java.time.LocalDateTime: represents SQL TIMESTAMP and may hold nanoseconds if the RDBMS supports it
java.sql.Date: represents SQL DATE
java.sql.Time: represents SQL TIME
java.sql.Timestamp: represents SQL TIMESTAMP and may hold nanoseconds if the RDBMS supports it
java.util.Date: represents SQL TIMESTAMP and doesn't hold nanoseconds
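The nanosecond difference can be observed with plain JDK classes; this small demo only illustrates the java.sql.Timestamp nanosecond field mentioned above:

```java
import java.sql.Timestamp;
import java.util.Date;

public class TemporalDemo {
    public static void main(String[] args) {
        // java.sql.Timestamp keeps sub-millisecond precision
        // in a separate nanoseconds field.
        Timestamp ts = new Timestamp(0L);
        ts.setNanos(123456789);
        System.out.println(ts.getNanos()); // 123456789

        // java.util.Date stores milliseconds only.
        Date d = new Date(0L);
        System.out.println(d.getTime()); // 0
    }
}
```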
Examples¶
Using in entity class¶
@Entity
public class Employee {

    @Id
    Integer employeeId;

    Optional<String> employeeName;

    @Version
    Long versionNo;

    ...
}
Using in domain class¶
@Domain(valueType = String.class)
public class PhoneNumber {

    private final String value;

    public PhoneNumber(String value) {
        this.value = value;
    }

    public String getValue() {
        return value;
    }
}
Using in DAO interface¶
@Dao(config = AppConfig.class)
public interface EmployeeDao {

    @Select
    Employee selectById(Integer employeeId);

    @Select
    List<String> selectAllName();
}
Domain classes¶
Contents
A domain class represents a table column and allows you to handle the column value as a Java object. In the Doma framework, a domain means the set of all values that a data type may contain. In short, a domain class is a user-defined class that can be mapped to a column. The use of domain classes is optional.
Every domain class is either an internal domain class or an external domain class.
Internal domain classes¶
The internal domain class must be annotated with @Domain
.
The @Domain
’s valueType
element corresponds to a data type of a column.
Specify any type of Basic classes to the valueType
element.
Instantiation with a constructor¶
The default value of the @Domain
’s factoryMethod
element is new
.
The value new means that the object of the annotated class is created with a constructor.
@Domain(valueType = String.class)
public class PhoneNumber {

    private final String value;

    public PhoneNumber(String value) {
        this.value = value;
    }

    public String getValue() {
        return value;
    }

    public String getAreaCode() {
        ...
    }
}
Note
In Java 14 and later versions, you can annotate records with @Domain:
@Domain(valueType = String.class, accessorMethod = "value")
public record PhoneNumber(String value) {

    public String getAreaCode() {
        ...
    }
}
Warning
Annotating records with @Domain is a little redundant, because you must specify some properties, such as valueType, to @Domain.
Instead of @Domain, you can annotate records with @DataType:
@DataType
public record PhoneNumber(String value) {

    public String getAreaCode() {
        ...
    }
}
But note that @DataType
is an experimental feature.
Instantiation with a static factory method¶
To create the object of the annotated class with a static factory method, specify the method name in the @Domain's factoryMethod element.
The method must be static and non-private:
@Domain(valueType = String.class, factoryMethod = "of")
public class PhoneNumber {

    private final String value;

    private PhoneNumber(String value) {
        this.value = value;
    }

    public String getValue() {
        return value;
    }

    public String getAreaCode() {
        ...
    }

    public static PhoneNumber of(String value) {
        return new PhoneNumber(value);
    }
}
With a static factory method, you can apply the @Domain
annotation to enum types:
@Domain(valueType = String.class, factoryMethod = "of")
public enum JobType {
    SALESMAN("10"),
    MANAGER("20"),
    ANALYST("30"),
    PRESIDENT("40"),
    CLERK("50");

    private final String value;

    private JobType(String value) {
        this.value = value;
    }

    public static JobType of(String value) {
        for (JobType jobType : JobType.values()) {
            if (jobType.value.equals(value)) {
                return jobType;
            }
        }
        throw new IllegalArgumentException(value);
    }

    public String getValue() {
        return value;
    }
}
Using type parameters in internal domain classes¶
Internal domain class declarations can have type parameters:
@Domain(valueType = int.class)
public class Identity<T> {
private final int value;
public Identity(int value) {
this.value = value;
}
public int getValue() {
return value;
}
}
When you create an object of the annotated class with a static factory method, the method declaration must have the same type parameters as the class declaration:
@Domain(valueType = int.class, factoryMethod = "of")
public class Identity<T> {
private final int value;
private Identity(int value) {
this.value = value;
}
public int getValue() {
return value;
}
public static <T> Identity<T> of(int value) {
return new Identity<T>(value);
}
}
External domain classes¶
This feature allows you to define arbitrary classes as domain classes, even if the classes cannot be annotated with the @Domain annotation.
To define external domain classes, do the following:
- Create a class that implements org.seasar.doma.jdbc.domain.DomainConverter and annotate it with @ExternalDomain
- Create a class that is annotated with @DomainConverters
- Specify the class annotated with @ExternalDomain in the @DomainConverters' value element
- Specify the fully qualified name of the class annotated with @DomainConverters as an option of Annotation processing
Suppose, for instance, there is a PhoneNumber class that you cannot change:
public class PhoneNumber {
private final String value;
public PhoneNumber(String value) {
this.value = value;
}
public String getValue() {
return value;
}
public String getAreaCode() {
...
}
}
First, to define the PhoneNumber class as an external domain class, create the following class:
@ExternalDomain
public class PhoneNumberConverter implements DomainConverter<PhoneNumber, String> {
public String fromDomainToValue(PhoneNumber domain) {
return domain.getValue();
}
public PhoneNumber fromValueToDomain(String value) {
if (value == null) {
return null;
}
return new PhoneNumber(value);
}
}
Then create the following class and specify the above class in the @DomainConverters' value element:
@DomainConverters({ PhoneNumberConverter.class })
public class DomainConvertersProvider {
}
Finally, specify the fully qualified name of the above class as an option of Annotation processing. If you use Gradle, specify the option in the build script as follows:
compileJava {
options {
compilerArgs = ['-Adoma.domain.converters=example.DomainConvertersProvider']
}
}
Using type parameters in external domain classes¶
External domain class declarations can have type parameters:
public class Identity<T> {
private final int value;
public Identity(int value) {
this.value = value;
}
public int getValue() {
return value;
}
}
In the DomainConverter implementation class, specify a wildcard ? as the type argument of the external domain class:
@ExternalDomain
public class IdentityConverter implements DomainConverter<Identity<?>, String> {
public String fromDomainToValue(Identity<?> domain) {
return String.valueOf(domain.getValue());
}
@SuppressWarnings("rawtypes")
public Identity<?> fromValueToDomain(String value) {
if (value == null) {
return null;
}
return new Identity(Integer.valueOf(value));
}
}
Example¶
The domain classes shown above are used as follows:
@Entity
public class Employee {
@Id
Identity<Employee> employeeId;
String employeeName;
PhoneNumber phoneNumber;
JobType jobType;
@Version
Integer versionNo;
...
}
@Dao(config = AppConfig.class)
public interface EmployeeDao {
@Select
Employee selectById(Identity<Employee> employeeId);
@Select
Employee selectByPhoneNumber(PhoneNumber phoneNumber);
@Select
List<PhoneNumber> selectAllPhoneNumber();
@Select
Employee selectByJobType(JobType jobType);
@Select
List<JobType> selectAllJobTypes();
}
Embeddable classes¶
Contents
Embeddable classes group the properties for Entity classes.
Embeddable definition¶
The following code snippet shows how to define an embeddable:
@Embeddable
public class Address {
final String city;
final String street;
@Column(name = "ZIP_CODE")
final String zip;
public Address(String city, String street, String zip) {
this.city = city;
this.street = street;
this.zip = zip;
}
}
The embeddable class is used as the entity field type:
@Entity
public class Employee {
@Id
Integer id;
Address address;
}
The above entity definition is equivalent to the following one:
@Entity
public class Employee {
@Id
Integer id;
String city;
String street;
@Column(name = "ZIP_CODE")
String zip;
}
Note
In Java 14 and later versions, you can annotate records with @Embeddable:
@Embeddable
public record Address(
String city,
String street,
@Column(name = "ZIP_CODE") String zip) {
}
Naming convention¶
The naming convention is inherited from the enclosing entity class.
Field definition¶
By default, the fields are persistent and correspond to the database columns or result set columns.
The field type must be one of the following:
- Basic classes
- Domain classes
- java.util.Optional, whose element is either Basic classes or Domain classes
- java.util.OptionalInt
- java.util.OptionalLong
- java.util.OptionalDouble
@Embeddable
public class Address {
...
String street;
}
Method definition¶
There are no limitations on the use of methods.
Entity classes¶
Contents
Entity classes correspond to database tables or query result sets.
Entity definition¶
The following code snippet shows how to define an entity:
@Entity
public class Employee {
...
}
An entity class can inherit from another entity class.
The following code snippet shows how to inherit from another entity class:
@Entity
public class SkilledEmployee extends Employee {
...
}
Note
In Java 14 and later versions, you can annotate records with @Entity:
@Entity
public record Employee(...) {
}
In that case, the entity is recognized as immutable even if the immutable property of @Entity is false.
Entity listeners¶
Entity listeners work before/after Doma issues the database modification statements - INSERT, DELETE and UPDATE.
The following code snippet shows how to define an entity listener:
public class EmployeeEntityListener implements EntityListener<Employee> {
...
}
To use the entity listener, specify it to the listener
property within the @Entity
annotation:
@Entity(listener = EmployeeEntityListener.class)
public class Employee {
...
}
An entity subclass inherits its parent's entity listener.
Naming convention¶
Naming convention maps the names between:
- the database tables and the Java entity classes
- the database columns and the Java entity fields
The following code snippet shows how to apply the naming convention to an entity:
@Entity(naming = NamingType.SNAKE_UPPER_CASE)
public class EmployeeInfo {
...
}
When the name
property within the @Table
or @Column
annotation is explicitly specified,
the naming convention is ignored.
An entity subclass inherits its parent's naming convention.
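As an illustration of the NamingType.SNAKE_UPPER_CASE convention shown above, here is a plain-Java sketch (a hypothetical helper, not Doma's actual implementation, which lives in org.seasar.doma.jdbc.entity.NamingType) of how a camel-case Java name maps to a snake-upper-case database name:

```java
// Hypothetical sketch of the SNAKE_UPPER_CASE naming convention.
public class NamingSketch {
    // Insert '_' before each upper-case letter (except the first character),
    // then upper-case the whole result.
    static String snakeUpperCase(String javaName) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < javaName.length(); i++) {
            char c = javaName.charAt(i);
            if (Character.isUpperCase(c) && i > 0) {
                sb.append('_');
            }
            sb.append(Character.toUpperCase(c));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // EmployeeInfo maps to the EMPLOYEE_INFO table under this convention.
        System.out.println(snakeUpperCase("EmployeeInfo"));
        System.out.println(snakeUpperCase("employeeName"));
    }
}
```

Under this sketch, the EmployeeInfo entity maps to the EMPLOYEE_INFO table, matching the convention described above.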
Immutable¶
An entity class can be immutable.
The following code snippet shows how to define an immutable entity:
@Entity(immutable = true)
public class Employee {
@Id
final Integer id;
final String name;
@Version
final Integer version;
public Employee(Integer id, String name, Integer version) {
this.id = id;
this.name = name;
this.version = version;
}
...
}
The immutable
property within the @Entity
annotation must be true
.
The persistent field must be final
.
An entity subclass inherits its parent's immutable property.
Table¶
You can specify the corresponding table name with the @Table
annotation:
@Entity
@Table(name = "EMP")
public class Employee {
...
}
Without the @Table
annotation, the table name is resolved by Naming Convention.
Field definition¶
By default, the fields are persistent and correspond to the database columns or result set columns.
The field type must be one of the following:
- Basic classes
- Domain classes
- Embeddable classes
- java.util.Optional, whose element is either Basic classes or Domain classes
- java.util.OptionalInt
- java.util.OptionalLong
- java.util.OptionalDouble
The following code snippet shows how to define a field:
@Entity
public class Employee {
...
Integer employeeId;
}
Column¶
You can specify the corresponding column name with the @Column
annotation:
@Column(name = "ENAME")
String employeeName;
To exclude fields from INSERT or UPDATE statements, specify false
to the insertable
or updatable
property within the @Column
annotation:
@Column(insertable = false, updatable = false)
String employeeName;
Without the @Column
annotation, the column name is resolved by Naming Convention.
Note
When the field type is an Embeddable class, you cannot specify the @Column annotation on the field.
Id¶
The database primary keys are represented with the @Id
annotation:
@Id
Integer id;
When there is a composite primary key, use the @Id annotation on multiple fields:
@Id
Integer id;
@Id
Integer id2;
Note
When the field type is an Embeddable class, you cannot specify the @Id annotation on the field.
Id generation¶
You can instruct Doma to generate id values automatically using the @GeneratedValue
annotation.
The field type must be one of the following:
- the subclass of java.lang.Number
- Domain classes, whose value type is the subclass of java.lang.Number
- java.util.Optional, whose element is either above types
- OptionalInt
- OptionalLong
- OptionalDouble
- the primitive types for number
Note
The generated values are assigned to the field only when the field is either null or less than 0.
If you use one of the primitive types as the field type, initialize the field with a value less than 0, such as -1.
Id generation by IDENTITY¶
To generate values using the RDBMS IDENTITY function, specify the GenerationType.IDENTITY
enum value
to strategy
property within the @GeneratedValue
:
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
Integer id;
In advance, define the database primary key as IDENTITY.
Warning
Not all RDBMSs support the IDENTITY function.
Id generation by SEQUENCE¶
To generate values using the RDBMS SEQUENCE, specify the GenerationType.SEQUENCE
enum value
to strategy
property within the @GeneratedValue
annotation.
And use the @SequenceGenerator
annotation:
@Id
@GeneratedValue(strategy = GenerationType.SEQUENCE)
@SequenceGenerator(sequence = "EMPLOYEE_SEQ")
Integer id;
In advance, define the SEQUENCE in the database.
The SEQUENCE definition, such as the name, the allocation size and the initial size, must correspond to the properties within the @SequenceGenerator annotation.
Warning
Not all RDBMSs support SEQUENCEs.
Id generation by TABLE¶
To generate values using the RDBMS TABLE, specify the GenerationType.TABLE
enum value
to strategy
property within the @GeneratedValue
annotation.
And use the @TableGenerator
annotation:
@Id
@GeneratedValue(strategy = GenerationType.TABLE)
@TableGenerator(pkColumnValue = "EMPLOYEE_ID")
Integer id;
In advance, define the TABLE in the database.
The TABLE's definition must correspond to the properties within the @TableGenerator annotation.
For example, the DDL should be as follows:
CREATE TABLE ID_GENERATOR(PK VARCHAR(20) NOT NULL PRIMARY KEY, VALUE INTEGER NOT NULL);
You can change the table name and the column names using the properties within the @TableGenerator
annotation.
Version¶
The version fields for optimistic locking are represented with the @Version
annotation.
The field type must be one of the following:
- the subclass of java.lang.Number
- Domain classes, whose value type is the subclass of java.lang.Number
- java.util.Optional, whose element is either above types
- OptionalInt
- OptionalLong
- OptionalDouble
- the primitive types for number
@Version
Integer version;
Note
When the field type is an Embeddable class, you cannot specify the @Version annotation on the field.
Tenant Id¶
The tenant id fields are represented with the @TenantId
annotation.
The column corresponding to the annotated field is included in the WHERE clause of UPDATE and DELETE statements.
@TenantId
String tenantId;
Note
When the field type is an Embeddable class, you cannot specify the @TenantId annotation on the field.
Transient¶
If an entity has fields that you don’t want to persist, you can annotate them using @Transient
:
@Transient
List<String> nameList;
OriginalStates¶
If you want to include only changed values in UPDATE statements, you can define a field annotated with @OriginalStates.
The field holds the original values that were fetched from the database.
Doma uses these values to determine which values have changed in the application and includes only the changed values in UPDATE statements.
The following code snippet shows how to define original states:
@OriginalStates
Employee originalStates;
The field type must be the same as the entity type.
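Conceptually, Doma compares the current entity with the saved original states and keeps only the columns whose values differ. The following plain-Java sketch (a hypothetical illustration, not Doma's internal algorithm) shows the idea with a hand-rolled comparison:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;

// Hypothetical sketch: compute which columns go into the SET clause
// by diffing the current entity against its original states.
public class DirtyCheckSketch {
    static class Emp {
        String name;
        Integer salary;
        Emp(String name, Integer salary) { this.name = name; this.salary = salary; }
    }

    // Only fields whose values differ from the original states are listed.
    static List<String> changedColumns(Emp original, Emp current) {
        List<String> set = new ArrayList<>();
        if (!Objects.equals(original.name, current.name)) set.add("NAME");
        if (!Objects.equals(original.salary, current.salary)) set.add("SALARY");
        return set;
    }

    public static void main(String[] args) {
        Emp original = new Emp("SMITH", 800); // as fetched from the database
        Emp current = new Emp("SMITH", 900);  // only the salary was changed
        System.out.println(changedColumns(original, current));
    }
}
```

In this sketch only the SALARY column would appear in the generated UPDATE statement's SET clause, mirroring the behavior described above.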
Method definition¶
There are no limitations on the use of methods.
Dao interfaces¶
A Data Access Object (DAO) is an interface for database access.
Dao definition¶
A DAO is defined as an interface annotated with @Dao.
A class that implements the DAO interface is generated at compile time by annotation processing.
Query definition¶
Queries can be defined using annotations.
If you want to build a query freely in Java code, use Query builders in a default method.
Default method¶
You can write arbitrary Java code in default methods.
You can get the Config instance associated with a DAO instance by calling Config.get with the DAO instance as the argument.
@Dao(config = AppConfig.class)
public interface EmployeeDao {
default int count() {
Config config = Config.get(this);
SelectBuilder builder = SelectBuilder.newInstance(config);
builder.sql("select count(*) from employee");
return builder.getScalarSingleResult(int.class);
}
}
Example¶
The implementation class is generated by the annotation processor at compile time, and that implementation class is instantiated and used. If the configuration class is managed by a DI container, however, the implementation class should also be instantiated by the DI container.
EmployeeDao employeeDao = new EmployeeDaoImpl();
Employee employee = employeeDao.selectById(1);
By default, the implementation class name is the interface name suffixed with Impl.
Refer to Annotation processing to change the package and suffix.
If you use the default constructor, the DataSource is determined by the configuration in the config element of @Dao.
But you can also instantiate the class with an explicitly specified DataSource.
DataSource dataSource = ...;
EmployeeDao employeeDao = new EmployeeDaoImpl(dataSource);
Employee employee = employeeDao.selectById(1);
You can also instantiate the class with an explicitly specified Connection.
Connection connection = ...;
EmployeeDao employeeDao = new EmployeeDaoImpl(connection);
Employee employee = employeeDao.selectById(1);
A DAO interface does not need to correspond one-to-one with an entity class. One DAO interface can handle more than one entity class.
@Dao(config = AppConfig.class)
public interface MyDao {
@Select
Employee selectEmployeeById(int id);
@Select
Department selectDepartmentByName(String name);
@Update
int updateAddress(Address address);
}
Queries¶
Search¶
Contents
Annotate a DAO method with @Select to execute a search.
@Dao(config = AppConfig.class)
public interface EmployeeDao {
@Select
List<Employee> selectByDepartmentName(String departmentName);
...
}
An SQL file is required for a search. There is no feature that auto-generates search SQL.
Note
You need to create an entity class that matches the search result. For example, a result set containing columns from the EMPLOYEE table can be received by the Employee class if the Employee class corresponding to the EMPLOYEE table is declared. However, for a result set obtained by joining the EMPLOYEE table and the DEPARTMENT table, you need a class different from the Employee entity class (for example, an EmployeeDepartment class).
Query condition¶
Use method parameters for query conditions. The available types are as follows.
- Basic classes
- Domain classes
- Arbitrary type
- Basic classes, Domain classes or arbitrary types within java.util.Optional
- Basic classes or Domain classes within java.util.Iterable
- java.util.OptionalInt
- java.util.OptionalLong
- java.util.OptionalDouble
There is no limit on the number of parameters.
You can set null to a parameter whose type is a Basic class or a Domain class.
A parameter of any other type must not be null.
Query using a basic class or a domain class¶
Declare Basic classes or Domain classes as the method parameters.
@Select
List<Employee> selectByNameAndSalary(String name, Salary salary);
Map the method parameters to the SQL using SQL comments in the SQL file. In the SQL comments, the method parameter names are referenced.
select * from employee where employee_name = /* name */'hoge' and salary > /* salary */100
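At run time, each SQL comment and the test value following it are replaced with a bind variable. The following plain-Java sketch (a deliberately simplified regex, not Doma's real SQL parser) illustrates the idea of converting two-way SQL into a prepared statement:

```java
// Hypothetical sketch: a comment followed by a test literal becomes
// a single '?' bind variable. Doma's actual parser is far more complete.
public class TwoWaySqlSketch {
    static String toPreparedSql(String twoWaySql) {
        // Match a block comment followed by a quoted string or a number
        // and replace the pair with a placeholder.
        return twoWaySql.replaceAll("/\\*\\s*[^*]+\\s*\\*/\\s*('([^']*)'|\\d+)", "?");
    }

    public static void main(String[] args) {
        String sql = "select * from employee where employee_name = /* name */'hoge' and salary > /* salary */100";
        System.out.println(toPreparedSql(sql));
    }
}
```

The printed result is the prepared form, `select * from employee where employee_name = ? and salary > ?`, while the original file remains executable SQL thanks to the test values ('hoge' and 100).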
Query using an arbitrary type¶
When you use an arbitrary type as a method parameter, map it to the SQL by accessing fields or calling methods with the dot (.) notation.
@Select
List<Employee> selectByExample(Employee employee);
select * from employee where employee_name = /* employee.name */'hoge' and salary > /* employee.getSalary() */100
You can specify multiple parameters.
@Select
List<Employee> selectByEmployeeAndDepartment(Employee employee, Department department);
Mapping to an IN clause using Iterable¶
Use a subtype of java.lang.Iterable to execute a search using an IN clause.
@Select
List<Employee> selectByNames(List<String> names);
select * from employee where employee_name in /* names */('aaa','bbb','ccc')
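Conceptually, each element of the Iterable becomes one bind variable inside the parentheses. The following plain-Java sketch (a hypothetical illustration, not Doma's implementation) shows the expansion:

```java
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical sketch: an Iterable bound to an IN clause expands into
// one '?' placeholder per element.
public class InClauseSketch {
    static String inPlaceholders(List<String> values) {
        return values.stream()
                .map(v -> "?")
                .collect(Collectors.joining(", ", "(", ")"));
    }

    public static void main(String[] args) {
        List<String> names = List.of("aaa", "bbb", "ccc");
        // select * from employee where employee_name in (?, ?, ?)
        System.out.println("select * from employee where employee_name in " + inPlaceholders(names));
    }
}
```

With three elements in the list, the prepared statement receives three bind variables, one per element.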
Single record search¶
To search for a single record, specify one of the following types as the method return value.
- Basic classes
- Domain classes
- Entity classes
- java.util.Map<String, Object>
- Basic classes, Domain classes, Entity classes or java.util.Map<String, Object> within java.util.Optional
- java.util.OptionalInt
- java.util.OptionalLong
- java.util.OptionalDouble
@Select
Employee selectByNameAndSalary(String name, BigDecimal salary);
null is returned if the return type is not Optional and the result count is 0.
If Ensure of search result is enabled, an exception is thrown regardless of the return type if the result count is 0.
NonUniqueResultException is thrown if two or more results exist.
Multiple record search¶
To search for multiple records, specify java.util.List as the method return type.
You can use the following types as the List element.
- Basic classes
- Domain classes
- Entity classes
- java.util.Map<String, Object>
- Basic classes or Domain classes within java.util.Optional
- java.util.OptionalInt
- java.util.OptionalLong
- java.util.OptionalDouble
@Select
List<Employee> selectByNameAndSalary(String name, Salary salary);
An empty list, instead of null, is returned if the result count is 0.
But if Ensure of search result is enabled, an exception is thrown if the result count is 0.
Stream search¶
You can use a stream search if you want to handle all records as a java.util.stream.Stream rather than receiving them as a java.util.List.
There are two kinds of stream search: returning the Stream as the return value, and passing the Stream to a java.util.function.Function.
Pass to the Function¶
Set SelectType.STREAM to the strategy property within the @Select annotation, and define a method parameter whose type is java.util.function.Function<Stream<TARGET>, RESULT> or its subtype.
@Select(strategy = SelectType.STREAM)
BigDecimal selectByNameAndSalary(String name, BigDecimal salary, Function<Stream<Employee>, BigDecimal> mapper);
The caller receives the stream and passes a lambda expression that returns the result.
EmployeeDao dao = new EmployeeDaoImpl();
BigDecimal result = dao.selectByNameAndSalary(name, salary, stream -> {
return ...;
});
The type parameter TARGET in Function<Stream<TARGET>, RESULT> must be one of the following.
- Basic classes
- Domain classes
- Entity classes
- java.util.Map<String, Object>
- Either Basic classes or Domain classes is within java.util.Optional
- java.util.OptionalInt
- java.util.OptionalLong
- java.util.OptionalDouble
The type parameter RESULT must match the DAO method return type.
If Ensure of search result is enabled, an exception is thrown if the result count is 0.
Return the return value¶
Define java.util.stream.Stream as the method return type.
You can use the following types as the Stream element.
- Basic classes
- Domain classes
- Entity classes
- java.util.Map<String, Object>
- Either Basic classes or Domain classes within java.util.Optional
- java.util.OptionalInt
- java.util.OptionalLong
- java.util.OptionalDouble
@Select
Stream<Employee> selectByNameAndSalary(String name, BigDecimal salary);
The caller is as follows.
EmployeeDao dao = new EmployeeDaoImpl();
try (Stream<Employee> stream = dao.selectByNameAndSalary(name, salary)) {
...
}
If Ensure of search result is enabled, an exception is thrown if the result count is 0.
Warning
Make sure to close the stream to release the resources.
If you do not close the stream, the java.sql.ResultSet, java.sql.PreparedStatement and java.sql.Connection are not closed.
Note
Consider passing to a Function unless there is a particular reason not to, because returning the Stream carries the risk of forgetting to release the resources.
Doma displays a warning message on the DAO method to draw attention to this.
Specify @Suppress as below to suppress the warning.
@Select
@Suppress(messages = { Message.DOMA4274 })
Stream<Employee> selectByNameAndSalary(String name, BigDecimal salary);
Collect search¶
You can use a collect search if you want to handle the result with a java.util.stream.Collector.
Set SelectType.COLLECT to the strategy property within the @Select annotation, and define a method parameter whose type is java.util.stream.Collector<TARGET, ACCUMULATION, RESULT>, java.util.stream.Collector<TARGET, ?, RESULT>, or a subtype of those.
@Select(strategy = SelectType.COLLECT)
<RESULT> RESULT selectBySalary(BigDecimal salary, Collector<Employee, ?, RESULT> collector);
The caller passes a Collector instance.
EmployeeDao dao = new EmployeeDaoImpl();
Map<Integer, List<Employee>> result =
dao.selectBySalary(salary, Collectors.groupingBy(Employee::getDepartmentId));
The type parameter TARGET in Collector<TARGET, ACCUMULATION, RESULT> must be one of the following.
- Basic classes
- Domain classes
- Entity classes
- java.util.Map<String, Object>
- Either Basic classes or Domain classes within java.util.Optional
- java.util.OptionalInt
- java.util.OptionalLong
- java.util.OptionalDouble
The type parameter RESULT must match the DAO method return type.
If Ensure of search result is enabled, an exception is thrown if the result count is 0.
Note
A collect search is a shortcut for the pass-to-Function form of a stream search.
You can do the equivalent by using the collect method on the Stream object obtained from a stream search.
Search using search options¶
Using SelectOptions, which represents the search options, you can automatically generate SQL for paging and pessimistic concurrency control from an SQL file in which a SELECT statement is written.
Use SelectOptions in combination with Single record search, Multiple record search and Stream search.
Define SelectOptions as a DAO method parameter.
@Dao(config = AppConfig.class)
public interface EmployeeDao {
@Select
List<Employee> selectByDepartmentName(String departmentName, SelectOptions options);
...
}
You can get a SelectOptions instance via the static get method.
SelectOptions options = SelectOptions.get();
Paging¶
Specify the start position with the offset method and the fetch count with the limit method, both within SelectOptions, and pass the SelectOptions instance to the DAO method.
SelectOptions options = SelectOptions.get().offset(5).limit(10);
EmployeeDao dao = new EmployeeDaoImpl();
List<Employee> list = dao.selectByDepartmentName("ACCOUNT", options);
Paging is achieved by rewriting the original SQL written in the file before executing it. The original SQL must satisfy the following conditions.
- It is a SELECT statement
- It does not execute a set operation like UNION, EXCEPT or INTERSECT at the top level (these can be used in subqueries)
- It does not include paging processing
In addition, a particular condition must be satisfied according to the database dialect.
Dialect | Condition |
---|---|
Db2Dialect | If offset is specified, there is an ORDER BY clause and all columns specified in the ORDER BY clause are included in the SELECT clause. |
Mssql2008Dialect | If offset is specified, there is an ORDER BY clause and all columns specified in the ORDER BY clause are included in the SELECT clause. |
MssqlDialect | If offset is specified, there is an ORDER BY clause. |
StandardDialect | There is an ORDER BY clause and all columns specified in the ORDER BY clause are included in the SELECT clause. |
Pessimistic concurrency control¶
Indicate pessimistic concurrency control with the forUpdate method within SelectOptions, and pass the SelectOptions instance to the DAO method.
SelectOptions options = SelectOptions.get().forUpdate();
EmployeeDao dao = new EmployeeDaoImpl();
List<Employee> list = dao.selectByDepartmentName("ACCOUNT", options);
Methods whose names start with forUpdate are provided for pessimistic concurrency control, such as the forUpdateNowait method, which does not wait to acquire a lock, and the forUpdate method, which can specify the lock target tables or column aliases.
Pessimistic concurrency control is executed by rewriting the original SQL written in the file. The original SQL must satisfy the following conditions.
- It is a SELECT statement
- It does not execute a set operation like UNION, EXCEPT or INTERSECT at the top level (these can be used in subqueries)
- It does not include pessimistic concurrency control processing
Some or all of the pessimistic concurrency control methods cannot be used depending on the database dialect.
Dialect | Description |
---|---|
Db2Dialect | You can use forUpdate(). |
H2Dialect | You can use forUpdate(). |
HsqldbDialect | You can use forUpdate(). |
Mssql2008Dialect | You can use forUpdate() and forUpdateNowait(). However, FROM clauses in original SQL must consist single table. |
MysqlDialect | You can use forUpdate() |
OracleDialect | You can use forUpdate(), forUpdate(String… aliases), forUpdateNowait(), forUpdateNowait(String… aliases), forUpdateWait(int waitSeconds), forUpdateWait(int waitSeconds, String… aliases). |
PostgresDialect | You can use forUpdate() and forUpdate(String… aliases). |
StandardDialect | You cannot use any of the pessimistic concurrency control methods. |
Aggregate¶
You can get the aggregate count by calling the count method within SelectOptions.
Usually, you use it in combination with the paging options to get the total count of rows as if no paging were applied.
SelectOptions options = SelectOptions.get().offset(5).limit(10).count();
EmployeeDao dao = new EmployeeDaoImpl();
List<Employee> list = dao.selectByDepartmentName("ACCOUNT", options);
long count = options.getCount();
Get the aggregate count using the getCount method within SelectOptions after calling the DAO method.
The getCount method returns -1 if you did not call the count method before calling the DAO method.
Ensure of search result¶
Specify true to the ensureResult property within the @Select annotation if you want to ensure that the search result count is one or more.
@Select(ensureResult = true)
Employee selectById(Integer id);
NoResultException is thrown if the search result count is 0.
Ensure of mapping search result¶
Specify true to the ensureResultMapping property within the @Select annotation if you want to ensure that all entity properties are mapped to result set columns without omission.
@Select(ensureResultMapping = true)
Employee selectById(Integer id);
ResultMappingException is thrown if there are properties that are not mapped to result set columns.
Query timeout¶
You can specify the query timeout in seconds with the queryTimeout property within the @Select annotation.
@Select(queryTimeout = 10)
List<Employee> selectAll();
The query timeout specified in the Configuration is used if the queryTimeout property is not set.
Fetch size¶
You can specify the fetch size with the fetchSize property within the @Select annotation.
@Select(fetchSize = 20)
List<Employee> selectAll();
The fetch size specified in the Configuration is used if the value is not set.
Max row count¶
You can specify the max row count with the maxRows property within the @Select annotation.
@Select(maxRows = 100)
List<Employee> selectAll();
The max row count specified in the Configuration is used if the value is not set.
Naming rule of map’s key¶
You can specify the naming rule for the map keys with the mapKeyNaming property within the @Select annotation if you want to map the search result to java.util.Map<String, Object>.
@Select(mapKeyNaming = MapKeyNamingType.CAMEL_CASE)
List<Map<String, Object>> selectAll();
MapKeyNamingType.CAMEL_CASE represents converting the column names to camel case.
There are also rules that convert to upper case or lower case.
The final conversion result is decided by the value specified here and the implementation of MapKeyNaming specified in the Configuration.
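As an illustration (a hypothetical sketch, not Doma's actual MapKeyNaming implementation), the CAMEL_CASE rule turns a snake-case column name into a camel-case map key roughly like this:

```java
// Hypothetical sketch of the CAMEL_CASE map-key naming rule.
public class MapKeySketch {
    // Lower-case everything, drop underscores, and upper-case the
    // character that follows each underscore.
    static String camelCase(String columnName) {
        StringBuilder sb = new StringBuilder();
        boolean upperNext = false;
        for (char c : columnName.toLowerCase().toCharArray()) {
            if (c == '_') {
                upperNext = true;
            } else {
                sb.append(upperNext ? Character.toUpperCase(c) : c);
                upperNext = false;
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // The EMPLOYEE_NAME column becomes the "employeeName" map key.
        System.out.println(camelCase("EMPLOYEE_NAME"));
    }
}
```

Under this sketch, a row from a query selecting EMPLOYEE_NAME would appear in the map as the key "employeeName".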
SQL log output format¶
You can specify the SQL log output format with the sqlLog property within the @Select annotation.
@Select(sqlLog = SqlLogType.RAW)
List<Employee> selectById(Integer id);
SqlLogType.RAW represents logging the SQL with bind parameters.
Insert¶
Contents
Annotate a DAO method with @Insert to execute an insert.
@Dao(config = AppConfig.class)
public interface EmployeeDao {
@Insert
int insert(Employee employee);
@Insert
Result<ImmutableEmployee> insert(ImmutableEmployee employee);
}
By default, the INSERT statement is auto-generated.
You can map an arbitrary SQL file to the method by setting true to the sqlFile element of @Insert.
The preInsert method of the entity listener is called before the insert is executed if an entity listener is specified for the entity class parameter.
Likewise, the postInsert method of the entity listener is called after the insert is executed.
Return value¶
The return value must be org.seasar.doma.jdbc.Result with the entity class as its element if the parameter is an immutable entity class.
Otherwise, the return value must be int, representing the updated count.
Insert by auto-generated SQL¶
The parameter type must be an entity class. Only one parameter can be specified. The parameter must not be null.
@Insert
int insert(Employee employee);
@Insert
Result<ImmutableEmployee> insert(ImmutableEmployee employee);
Identifier¶
The identifier is auto-generated and set if the entity class identifier is annotated with @GeneratedValue.
See Id generation for cautionary points.
Version numbers¶
If the entity class has a property annotated with @Version, and a value greater than 0 is explicitly set, that value is used.
If the value is not set or is less than 0, the value 1 is set automatically.
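The version initialization rule above can be sketched in plain Java (a hypothetical helper, not Doma's API):

```java
// Hypothetical sketch of the initial version rule described above.
public class VersionSketch {
    // A value greater than 0 is respected; anything else becomes 1.
    static int initialVersion(Integer explicitValue) {
        return (explicitValue != null && explicitValue > 0) ? explicitValue : 1;
    }

    public static void main(String[] args) {
        System.out.println(initialVersion(null)); // not set: defaults to 1
        System.out.println(initialVersion(-1));   // negative: defaults to 1
        System.out.println(initialVersion(7));    // explicit positive value is kept
    }
}
```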
Control insertion target property¶
insertable¶
If an entity class has a property annotated with @Column, and false is set to the insertable element of that @Column, the property is excluded from the insertion.
exclude¶
Properties specified in the exclude element of @Insert are excluded from the insertion.
Even if the insertable element of @Column is true, a property is excluded from the insertion if it is specified in this element.
@Insert(exclude = {"name", "salary"})
int insert(Employee employee);
include¶
Only the properties specified in the include element of @Insert are included in the insertion.
If the same property is specified in both the include element and the exclude element of @Insert, it is excluded from the insertion.
Even if a property is specified in this element, it is excluded from the insertion if the insertable element of its @Column is false.
@Insert(include = {"name", "salary"})
int insert(Employee employee);
excludeNull¶
Properties whose values are null are excluded from the insertion if the excludeNull element of @Insert is true.
When this element is true, a property whose value is null is excluded from the insertion even if the insertable element of @Column is true or the property is specified in the include element of @Insert.
@Insert(excludeNull = true)
int insert(Employee employee);
Insert by SQL file¶
To execute an insertion by SQL file, set true to the sqlFile element of @Insert and prepare an SQL file that corresponds to the method.
You can use arbitrary types as parameters, and there is no limit on the number of parameters.
You can set null to a parameter whose type is a basic class or a domain class.
A parameter of any other type must not be null.
@Insert(sqlFile = true)
int insert(Employee employee);
@Insert(sqlFile = true)
Result<ImmutableEmployee> insert(ImmutableEmployee employee);
For example, the SQL file corresponding to the above methods looks like this:
insert into employee (id, name, salary, version)
values (/* employee.id */0,
/* employee.name */'hoge',
/* employee.salary */100,
/* employee.version */0)
Automatic identifier setting and automatic version number setting are not performed when inserting with an SQL file.
In addition, the exclude, include, and excludeNull elements of @Insert are not referenced.
Unique constraint violation¶
UniqueConstraintException is thrown when a unique constraint violation occurs, regardless of whether an SQL file is used.
Query timeout¶
You can specify the query timeout in seconds in the queryTimeout element of @Insert.
@Insert(queryTimeout = 10)
int insert(Employee employee);
This setting is applied regardless of whether an SQL file is used.
If no value is set for the queryTimeout element, the query timeout specified in Configuration is used.
SQL log output format¶
You can specify the SQL log output format in the sqlLog element of @Insert.
@Insert(sqlLog = SqlLogType.RAW)
int insert(Employee employee);
SqlLogType.RAW means that the SQL with bind parameters is logged.
Update¶
Contents
Annotate a Dao method with @Update to execute an update.
@Dao(config = AppConfig.class)
public interface EmployeeDao {
@Update
int update(Employee employee);
@Update
Result<ImmutableEmployee> update(ImmutableEmployee employee);
}
By default the UPDATE statement is auto generated.
You can map the method to an arbitrary SQL file by setting true to the sqlFile property of the @Update annotation.
If an entity listener is specified for the entity class of the parameter, its preUpdate method is called before the update is executed and its postUpdate method is called after the update is executed.
Return value¶
If the parameter is an immutable entity class, the return value must be org.seasar.doma.jdbc.Result whose element is that entity class.
Otherwise, the return value must be int, representing the updated row count.
Update by auto generated SQL¶
The parameter type must be an entity class. Only one parameter can be specified, and it must not be null.
@Update
int update(Employee employee);
@Update
Result<ImmutableEmployee> update(ImmutableEmployee employee);
Version number and optimistic concurrency control in auto generated SQL¶
Optimistic concurrency control is executed when the following conditions are satisfied:
- The entity class of the parameter has a property annotated with @Version
- The ignoreVersion element of the @Update annotation is false
When optimistic concurrency control is enabled, the version number is included in the update condition together with the identifier and is incremented by 1.
If the updated count is 0, OptimisticLockException is thrown to indicate that optimistic concurrency control failed.
If the updated count is not 0, OptimisticLockException is not thrown and the version property of the entity is incremented by 1.
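The mechanism can be pictured with a small plain-Java sketch (this is not Doma's internal implementation; the in-memory map stands in for the table):

```java
import java.util.HashMap;
import java.util.Map;

public class OptimisticLockSketch {
    // id -> version, standing in for the EMPLOYEE table.
    static final Map<Integer, Integer> versions = new HashMap<>();

    // Mimics: UPDATE ... SET version = version + 1
    //         WHERE id = ? AND version = ?
    // Returns the updated row count (0 or 1).
    static int update(int id, int expectedVersion) {
        Integer current = versions.get(id);
        if (current == null || current != expectedVersion) {
            return 0; // stale version: no row matches the WHERE clause
        }
        versions.put(id, current + 1);
        return 1;
    }

    public static void main(String[] args) {
        versions.put(1, 1);
        System.out.println(update(1, 1)); // succeeds: version becomes 2
        // Retrying with the stale version returns 0; this is the case in
        // which Doma throws OptimisticLockException.
        System.out.println(update(1, 1));
    }
}
```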
ignoreVersion¶
If the ignoreVersion property of the @Update annotation is true, the version number is not included in the update condition but is included in the SET clause of the UPDATE statement; the version number is updated to whatever value the application has set.
In this case, OptimisticLockException is not thrown even if the updated count is 0.
@Update(ignoreVersion = true)
int update(Employee employee);
suppressOptimisticLockException¶
If the suppressOptimisticLockException property of @Update is true and a property annotated with @Version exists, the version number is included in the update condition and incremented by 1, but OptimisticLockException is not thrown even if the updated count is 0.
The version property of the entity is still incremented by 1.
@Update(suppressOptimisticLockException = true)
int update(Employee employee);
Controlling the update target properties¶
updatable¶
If the entity class has a property annotated with @Column and the updatable property of the @Column annotation is false, that property is excluded from the update.
exclude¶
A property specified in the exclude property of the @Update annotation is excluded from the update.
The property is excluded even if the updatable property of its @Column annotation is true.
@Update(exclude = {"name", "salary"})
int update(Employee employee);
include¶
Only the properties specified in the include property of the @Update annotation are included in the update.
If the same property is specified in both the include and exclude properties of @Update, it is excluded from the update.
Even if a property is specified in this element, it is excluded from the update when the updatable property of its @Column annotation is false.
@Update(include = {"name", "salary"})
int update(Employee employee);
excludeNull¶
If the excludeNull property of the @Update annotation is true, properties whose values are null are excluded from the update.
When this element is true, a property whose value is null is excluded even if the updatable property of @Column is true or the property is specified in the include property of @Update.
@Update(excludeNull = true)
int update(Employee employee);
includeUnchanged¶
This element is effective only when the entity class being updated has a property annotated with @OriginalStates.
If this element is true, all properties of the entity are updated; that is, the columns corresponding to all properties are included in the SET clause of the UPDATE statement.
If this element is false, only the properties that have actually changed since the entity was fetched are updated; that is, only the columns corresponding to the modified properties are included in the SET clause of the UPDATE statement.
@Update(includeUnchanged = true)
int update(Employee employee);
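The dirty checking that @OriginalStates enables can be pictured with a plain-Java sketch (not Doma's internal implementation; `changedColumns` is a hypothetical helper that compares the fetched snapshot with the current values):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;

public class DirtyCheckSketch {
    // Returns the names of the columns whose values differ from the
    // snapshot taken when the entity was fetched.
    static List<String> changedColumns(String[] names, Object[] original, Object[] current) {
        List<String> changed = new ArrayList<>();
        for (int i = 0; i < names.length; i++) {
            if (!Objects.equals(original[i], current[i])) {
                changed.add(names[i]);
            }
        }
        return changed;
    }

    public static void main(String[] args) {
        String[] names = {"NAME", "SALARY"};
        Object[] original = {"hoge", 100};
        Object[] current = {"hoge", 200};
        // With includeUnchanged = false, only SALARY would appear in the
        // SET clause of the generated UPDATE statement.
        System.out.println(changedColumns(names, original, current));
    }
}
```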
Update by SQL file¶
To execute an update with an SQL file, set true to the sqlFile property of the @Update annotation and prepare an SQL file that corresponds to the method.
Note
When updating with an SQL file, the rules differ depending on whether the populate directive is used.
Case of using the comment that generates the update column list¶
The first parameter type must be an entity class.
There is no limit on the number of parameters.
A parameter may be null if its type is a basic type or a domain class; parameters of other types must not be null.
@Update(sqlFile = true)
int update(Employee employee, BigDecimal salary);
@Update(sqlFile = true)
Result<ImmutableEmployee> update(ImmutableEmployee employee, BigDecimal salary);
For example, the SQL file corresponding to the above methods looks like this:
update employee set /*%populate*/ id = id where salary > /* salary */0
The rules for controlling the update target properties are the same as in Update by auto generated SQL.
Case of not using the comment that generates the update column list¶
You can use arbitrary types as parameters, and there is no limit on the number of parameters.
A parameter may be null if its type is a basic type or a domain class; parameters of other types must not be null.
@Update(sqlFile = true)
int update(Employee employee);
@Update(sqlFile = true)
Result<ImmutableEmployee> update(ImmutableEmployee employee);
For example, the SQL file corresponding to the above methods looks like this:
update employee set name = /* employee.name */'hoge', salary = /* employee.salary */100
where id = /* employee.id */0
The exclude, include, excludeNull, and includeUnchanged properties of the @Update annotation are not referenced when updating with an SQL file.
Version number and optimistic concurrency control in SQL file¶
Optimistic concurrency control is executed when the following conditions are satisfied:
- An entity class is included in the parameters
- The first entity class parameter from the left has a property annotated with @Version
- The ignoreVersion element of the @Update annotation is false
However, writing SQL that performs optimistic concurrency control in the SQL file is the application developer's responsibility. For example, as in the SQL below, you must specify the version number in the WHERE clause and increment it by 1 in the SET clause.
update EMPLOYEE set DELETE_FLAG = 1, VERSION = /* employee.version */1 + 1
where ID = /* employee.id */1 and VERSION = /* employee.version */1
If the updated count of this SQL is 0, OptimisticLockException is thrown to indicate that optimistic concurrency control failed.
If the updated count is not 0, OptimisticLockException is not thrown and the version property of the entity is incremented by 1.
If the ignoreVersion property of the @Update annotation is true, OptimisticLockException is not thrown even if the updated count is 0.
Also, the version property of the entity is not modified.
@Update(sqlFile = true, ignoreVersion = true)
int update(Employee employee);
If the suppressOptimisticLockException property of the @Update annotation is true, OptimisticLockException is not thrown even if the updated count is 0.
However, the version property of the entity is incremented by 1.
@Update(sqlFile = true, suppressOptimisticLockException = true)
int update(Employee employee);
Unique constraint violation¶
UniqueConstraintException is thrown when a unique constraint violation occurs, regardless of whether an SQL file is used.
Query timeout¶
You can specify the query timeout in seconds in the queryTimeout property of the @Update annotation.
@Update(queryTimeout = 10)
int update(Employee employee);
This setting is applied regardless of whether an SQL file is used.
If no value is set for the queryTimeout property, the query timeout specified in Configuration is used.
SQL log output format¶
You can specify the SQL log output format in the sqlLog property of the @Update annotation.
@Update(sqlLog = SqlLogType.RAW)
int update(Employee employee);
SqlLogType.RAW means that the SQL with bind parameters is logged.
Delete¶
Contents
Annotate a Dao method with @Delete to execute a delete.
@Dao(config = AppConfig.class)
public interface EmployeeDao {
@Delete
int delete(Employee employee);
}
By default the DELETE statement is auto generated.
You can map the method to an arbitrary SQL file by setting true to the sqlFile property of the @Delete annotation.
If an entity listener is specified for the entity class of the parameter, its preDelete method is called before the delete is executed and its postDelete method is called after the delete is executed.
Return value¶
If the parameter is an immutable entity class, the return value must be org.seasar.doma.jdbc.Result whose element is that entity class.
Otherwise, the return value must be int, representing the deleted row count.
Delete by auto generated SQL¶
The parameter type must be an entity class. Only one parameter can be specified, and it must not be null.
@Delete
int delete(Employee employee);
@Delete
Result<ImmutableEmployee> delete(ImmutableEmployee employee);
Version number and optimistic concurrency control in auto generated SQL¶
Optimistic concurrency control is executed when the following conditions are satisfied:
- The entity class of the parameter has a property annotated with @Version
- The ignoreVersion element of the @Delete annotation is false
When optimistic concurrency control is enabled, the version number is included in the delete condition together with the identifier.
If the deleted count is 0, OptimisticLockException is thrown to indicate that optimistic concurrency control failed.
ignoreVersion¶
If the ignoreVersion property of the @Delete annotation is true, the version number is not included in the delete condition.
In this case, OptimisticLockException is not thrown even if the deleted count is 0.
@Delete(ignoreVersion = true)
int delete(Employee employee);
suppressOptimisticLockException¶
If the suppressOptimisticLockException property of @Delete is true, the version number is included in the delete condition, but OptimisticLockException is not thrown even if the deleted count is 0.
@Delete(suppressOptimisticLockException = true)
int delete(Employee employee);
Delete by SQL file¶
To execute a delete with an SQL file, set true to the sqlFile property of the @Delete annotation and prepare an SQL file that corresponds to the method.
You can use arbitrary types as parameters, and there is no limit on the number of parameters.
A parameter may be null if its type is a basic type or a domain class; parameters of other types must not be null.
Entity listener methods are not called even if an entity listener is specified for the entity.
@Delete(sqlFile = true)
int delete(Employee employee);
For example, the SQL file corresponding to the above method looks like this:
delete from employee where name = /* employee.name */'hoge'
Version number and optimistic concurrency control in SQL File¶
Optimistic concurrency control is executed when the following conditions are satisfied:
- An entity class is included in the parameters
- The first entity class parameter from the left has a property annotated with @Version
- The ignoreVersion property of the @Delete annotation is false
- The suppressOptimisticLockException property of the @Delete annotation is false
However, writing SQL that performs optimistic concurrency control in the SQL file is the application developer's responsibility. For example, as in the SQL below, you must specify the version number in the WHERE clause.
delete from EMPLOYEE where ID = /* employee.id */1 and VERSION = /* employee.version */1
If the deleted count of this SQL is 0, OptimisticLockException is thrown to indicate that optimistic concurrency control failed.
If the deleted count is not 0, OptimisticLockException is not thrown.
ignoreVersion¶
If the ignoreVersion property of the @Delete annotation is true, OptimisticLockException is not thrown even if the deleted count is 0.
@Delete(sqlFile = true, ignoreVersion = true)
int delete(Employee employee);
suppressOptimisticLockException¶
If the suppressOptimisticLockException property of the @Delete annotation is true, OptimisticLockException is not thrown even if the deleted count is 0.
@Delete(sqlFile = true, suppressOptimisticLockException = true)
int delete(Employee employee);
Query timeout¶
You can specify the query timeout in seconds in the queryTimeout property of the @Delete annotation.
@Delete(queryTimeout = 10)
int delete(Employee employee);
This setting is applied regardless of whether an SQL file is used.
If no value is set for the queryTimeout property, the query timeout specified in Configuration is used.
SQL log output format¶
You can specify the SQL log output format in the sqlLog property of the @Delete annotation.
@Delete(sqlLog = SqlLogType.RAW)
int delete(Employee employee);
SqlLogType.RAW means that the SQL with bind parameters is logged.
Batch insert¶
Contents
Annotate a Dao method with @BatchInsert to execute batch inserts.
@Dao(config = AppConfig.class)
public interface EmployeeDao {
@BatchInsert
int[] insert(List<Employee> employees);
@BatchInsert
BatchResult<ImmutableEmployee> insert(List<ImmutableEmployee> employees);
}
By default the INSERT statement is auto generated.
You can map the method to an arbitrary SQL file by setting true to the sqlFile property of the @BatchInsert annotation.
If an entity listener is specified for the entity class of the parameter elements, its preInsert method is called for each entity before the insert is executed and its postInsert method is called for each entity after the insert is executed.
Return value¶
If the element of the Iterable subtype parameter is an immutable entity class, the return value must be org.seasar.doma.jdbc.BatchResult whose element is that entity class.
Otherwise, the return value must be int[], where each element represents the updated count of the corresponding insert.
Batch insert by auto generated SQL¶
The parameter type must be a java.lang.Iterable subtype whose element is an entity class.
Only one parameter can be specified, and it must not be null.
The number of elements in the returned array equals the number of elements in the Iterable; each array element holds the insert count of the corresponding statement.
Identifier¶
If the identifier of the entity class is annotated with @GeneratedValue, the identifier is automatically generated and set.
See Id generation for cautionary points.
Version number¶
If the entity class has a property annotated with @Version, an explicitly set value greater than 0 is used as-is.
If the value is not set or is less than 0, the value 1 is set automatically.
Insert target property¶
insertable¶
If the entity class has a property annotated with @Column and the insertable property of the @Column annotation is false, that property is excluded from the insert.
exclude¶
A property specified in the exclude property of the @BatchInsert annotation is excluded from the insert.
The property is excluded even if the insertable property of its @Column annotation is true.
@BatchInsert(exclude = {"name", "salary"})
int[] insert(List<Employee> employees);
include¶
Only the properties specified in the include property of the @BatchInsert annotation are included in the insert.
If the same property is specified in both the include and exclude properties of @BatchInsert, it is excluded from the insert.
Even if a property is specified in this element, it is excluded from the insert when the insertable property of its @Column annotation is false.
@BatchInsert(include = {"name", "salary"})
int[] insert(List<Employee> employees);
Batch insert by SQL file¶
To execute batch inserts with an SQL file, set true to the sqlFile property of the @BatchInsert annotation and prepare an SQL file that corresponds to the method.
@BatchInsert(sqlFile = true)
int[] insert(List<Employee> employees);
@BatchInsert(sqlFile = true)
BatchResult<ImmutableEmployee> insert(List<ImmutableEmployee> employees);
The parameter type must be a java.lang.Iterable subtype whose element is an entity class.
Only one parameter can be specified, and it must not be null.
The number of elements in the returned array equals the number of elements in the Iterable; each array element holds the insert count of the corresponding statement.
Entity listener methods are not called even if an entity listener is specified for the entity class.
For example, the SQL file corresponding to the above methods looks like this:
insert into employee (id, name, salary, version)
values (/* employees.id */0, /* employees.name */'hoge', /* employees.salary */100, /* employees.version */0)
In the SQL file, the parameter name refers to an element of the java.lang.Iterable subtype.
Automatic identifier setting and automatic version number setting are not performed in batch inserts by SQL file.
In addition, the exclude and include properties of @BatchInsert are not referenced.
Unique constraint violation¶
UniqueConstraintException is thrown when a unique constraint violation occurs, regardless of whether an SQL file is used.
Query timeout¶
You can specify the query timeout in seconds in the queryTimeout property of the @BatchInsert annotation.
@BatchInsert(queryTimeout = 10)
int[] insert(List<Employee> employees);
This setting is applied regardless of whether an SQL file is used.
If no value is set for the queryTimeout property, the query timeout specified in the Configuration class is used.
Batch size¶
You can specify the batch size in the batchSize property of the @BatchInsert annotation.
@BatchInsert(batchSize = 10)
int[] insert(List<Employee> employees);
This setting is applied regardless of whether an SQL file is used.
If you do not specify a value for the batchSize property, the batch size specified in the Configuration class is applied.
SQL log output format¶
You can specify the SQL log output format in the sqlLog property of the @BatchInsert annotation.
@BatchInsert(sqlLog = SqlLogType.RAW)
int[] insert(List<Employee> employees);
SqlLogType.RAW means that the SQL with bind parameters is logged.
Batch update¶
Contents
Annotate a Dao method with @BatchUpdate to execute batch updates.
@Dao(config = AppConfig.class)
public interface EmployeeDao {
@BatchUpdate
int[] update(List<Employee> employees);
@BatchUpdate
BatchResult<ImmutableEmployee> update(List<ImmutableEmployee> employees);
}
By default the UPDATE statement is auto generated.
You can map the method to an arbitrary SQL file by setting true to the sqlFile property of the @BatchUpdate annotation.
If an entity listener is specified for the entity class of the parameter elements, its preUpdate method is called for each entity before the update is executed and its postUpdate method is called for each entity after the update is executed.
Return value¶
If the element of the Iterable subtype parameter is an immutable entity class, the return value must be org.seasar.doma.jdbc.BatchResult whose element is that entity class.
Otherwise, the return value must be int[], where each element represents the updated count of the corresponding update.
Batch update by auto generated SQL¶
The parameter type must be a java.lang.Iterable subtype whose element is an entity class.
Only one parameter can be specified, and it must not be null.
The number of elements in the returned array equals the number of elements in the Iterable; each array element holds the update count of the corresponding statement.
Version number and optimistic concurrency control in auto generated SQL¶
Optimistic concurrency control is executed when the following conditions are satisfied:
- The entity class that is the element of the java.lang.Iterable subtype parameter has a property annotated with @Version
- The ignoreVersion element of the @BatchUpdate annotation is false
When optimistic concurrency control is enabled, the version number is included in the update condition together with the identifier and is incremented by 1.
If an updated count is 0, BatchOptimisticLockException is thrown to indicate that optimistic concurrency control failed.
If an updated count is 1, BatchOptimisticLockException is not thrown and the version property of the entity is incremented by 1.
ignoreVersion¶
If the ignoreVersion property of the @BatchUpdate annotation is true, the version number is not included in the update condition but is included in the SET clause of the UPDATE statement; the version number is updated to whatever value the application has set.
In this case, BatchOptimisticLockException is not thrown even if an updated count is 0.
@BatchUpdate(ignoreVersion = true)
int[] update(List<Employee> employees);
suppressOptimisticLockException¶
If the suppressOptimisticLockException property of @BatchUpdate is true and a property annotated with @Version exists, the version number is included in the update condition and incremented by 1, but BatchOptimisticLockException is not thrown even if an updated count is 0.
The version property of the entity is still incremented by 1.
@BatchUpdate(suppressOptimisticLockException = true)
int[] update(List<Employee> employees);
Update target property¶
updatable¶
If the entity class has a property annotated with @Column and the updatable property of the @Column annotation is false, that property is excluded from the update.
exclude¶
A property specified in the exclude property of the @BatchUpdate annotation is excluded from the update.
The property is excluded even if the updatable property of its @Column annotation is true.
@BatchUpdate(exclude = {"name", "salary"})
int[] update(List<Employee> employees);
include¶
Only the properties specified in the include property of the @BatchUpdate annotation are included in the update.
If the same property is specified in both the include and exclude properties of @BatchUpdate, it is excluded from the update.
Even if a property is specified in this element, it is excluded from the update when the updatable property of its @Column annotation is false.
@BatchUpdate(include = {"name", "salary"})
int[] update(List<Employee> employees);
Batch update by SQL file¶
To execute batch updates with an SQL file, set true to the sqlFile property of the @BatchUpdate annotation and prepare an SQL file that corresponds to the method.
Note
In batch updates by SQL file, the rules differ depending on whether the population directive is used.
Case of using the comment that generates the update column list¶
@BatchUpdate(sqlFile = true)
int[] update(List<Employee> employees);
@BatchUpdate(sqlFile = true)
BatchResult<ImmutableEmployee> update(List<ImmutableEmployee> employees);
The parameter type must be a java.lang.Iterable subtype whose element is an entity class.
Only one parameter can be specified, and it must not be null.
The number of elements in the returned array equals the number of elements in the Iterable; each array element holds the update count of the corresponding statement.
For example, the SQL file corresponding to the above methods looks like this:
update employee set /*%populate*/ id = id where name = /* employees.name */'hoge'
In the SQL file, the parameter name refers to an element of the Iterable subtype.
The rules about the update target properties are the same as in Batch update by auto generated SQL.
Case of not using the comment that generates the update column list¶
@BatchUpdate(sqlFile = true)
int[] update(List<Employee> employees);
@BatchUpdate(sqlFile = true)
BatchResult<ImmutableEmployee> update(List<ImmutableEmployee> employees);
The parameter type must be a java.lang.Iterable subtype whose element is an arbitrary type.
Only one parameter can be specified, and it must not be null.
The number of elements in the returned array equals the number of elements in the Iterable; each array element holds the update count of the corresponding statement.
For example, the SQL file corresponding to the above methods looks like this:
update employee set name = /* employees.name */'hoge', salary = /* employees.salary */100
where id = /* employees.id */0
In the SQL file, the parameter name refers to an element of the Iterable subtype.
Automatic version number updating is not performed in batch updates by SQL file.
In addition, the exclude and include properties of the @BatchUpdate annotation are not referenced.
Version number and optimistic concurrency control in SQL file¶
Optimistic concurrency control is executed when the following conditions are satisfied:
- The element of the java.lang.Iterable subtype parameter is an entity class that has a property annotated with @Version
- The ignoreVersion property of the @BatchUpdate annotation is false
However, writing SQL that performs optimistic concurrency control in the SQL file is the application developer's responsibility. For example, as in the SQL below, you must specify the version number in the WHERE clause and increment it by 1 in the SET clause.
update EMPLOYEE set DELETE_FLAG = 1, VERSION = /* employees.version */1 + 1
where ID = /* employees.id */1 and VERSION = /* employees.version */1
If an updated count of this SQL is 0, BatchOptimisticLockException is thrown to indicate that optimistic concurrency control failed.
If an updated count is 1, BatchOptimisticLockException is not thrown and the version property of the entity is incremented by 1.
If the ignoreVersion property of the @BatchUpdate annotation is true, BatchOptimisticLockException is not thrown even if an updated count is 0 or greater than 1.
Also, the version property of the entity is not modified.
@BatchUpdate(sqlFile = true, ignoreVersion = true)
int[] update(List<Employee> employees);
If the suppressOptimisticLockException property of @BatchUpdate is true, BatchOptimisticLockException is not thrown even if an updated count is 0.
However, the version property of the entity is incremented by 1.
@BatchUpdate(sqlFile = true, suppressOptimisticLockException = true)
int[] update(List<Employee> employees);
Unique constraint violation¶
UniqueConstraintException is thrown when a unique constraint violation occurs, regardless of whether an SQL file is used.
Query timeout¶
You can specify the query timeout in seconds in the queryTimeout property of the @BatchUpdate annotation.
@BatchUpdate(queryTimeout = 10)
int[] update(List<Employee> employees);
This setting is applied regardless of whether an SQL file is used.
If no value is set for the queryTimeout property, the query timeout specified in the Configuration class is used.
Batch size¶
You can specify the batch size in the batchSize property of the @BatchUpdate annotation.
@BatchUpdate(batchSize = 10)
int[] update(List<Employee> employees);
This setting is applied regardless of whether an SQL file is used.
If you do not specify a value for the batchSize property, the batch size specified in the Configuration class is applied.
SQL log output format¶
You can specify the SQL log output format in the sqlLog property of the @BatchUpdate annotation.
@BatchUpdate(sqlLog = SqlLogType.RAW)
int[] update(List<Employee> employees);
SqlLogType.RAW means that the SQL with bind parameters is logged.
Batch delete¶
Contents
Annotate a Dao method with @BatchDelete to execute batch deletes.
@Dao(config = AppConfig.class)
public interface EmployeeDao {
@BatchDelete
int[] delete(List<Employee> employees);
...
}
By default the DELETE statement is auto generated.
You can map the method to an arbitrary SQL file by setting true to the sqlFile property of the @BatchDelete annotation.
If an entity listener is specified for the entity class of the parameter elements, its preDelete method is called for each entity before the delete is executed and its postDelete method is called for each entity after the delete is executed.
Return value¶
If the element of the Iterable subtype parameter is an immutable entity class, the return value must be org.seasar.doma.jdbc.BatchResult whose element is that entity class.
Otherwise, the return value must be int[], where each element represents the deleted count of the corresponding delete.
Batch delete by auto generated SQL¶
The parameter type must be a java.lang.Iterable subtype whose element is an entity class.
Only one parameter can be specified, and it must not be null.
The number of elements in the returned array equals the number of elements in the Iterable; each array element holds the delete count of the corresponding statement.
Version number and optimistic concurrency control in auto generated SQL¶
Optimistic concurrency control is executed when the following conditions are satisfied:
- The entity class that is the element of the java.lang.Iterable subtype parameter has a property annotated with @Version
- The ignoreVersion element of the @BatchDelete annotation is false
When optimistic concurrency control is enabled, the version number is included in the delete condition together with the identifier.
If a deleted count is 0, BatchOptimisticLockException is thrown to indicate that optimistic concurrency control failed.
ignoreVersion¶
If the ignoreVersion property of the @BatchDelete annotation is true, the version number is not included in the delete condition.
In this case, BatchOptimisticLockException is not thrown even if a deleted count is 0.
@BatchDelete(ignoreVersion = true)
int[] delete(List<Employee> employees);
suppressOptimisticLockException¶
If the suppressOptimisticLockException property of @BatchDelete is true, the version number is included in the delete condition, but BatchOptimisticLockException is not thrown even if a deleted count is 0.
@BatchDelete(suppressOptimisticLockException = true)
int[] delete(List<Employee> employees);
Batch delete by SQL file¶
To execute batch deletes with an SQL file, set true to the sqlFile property of the @BatchDelete annotation and prepare an SQL file that corresponds to the method.
@BatchDelete(sqlFile = true)
int[] delete(List<Employee> employees);
The parameter type must be a java.lang.Iterable subtype whose element is an arbitrary type.
Only one parameter can be specified, and it must not be null.
The number of elements in the returned array equals the number of elements in the Iterable; each array element holds the delete count of the corresponding statement.
For example, the SQL file corresponding to the above method looks like this:
delete from employee where name = /* employees.name */'hoge'
In the SQL file, the parameter name refers to an element of the java.lang.Iterable subtype.
Version number and optimistic concurrency control in SQL file¶
Optimistic concurrency control is executed when the following conditions are satisfied:
- The element of the java.lang.Iterable subtype parameter is an entity class that has a property annotated with @Version
- The ignoreVersion element of the @BatchDelete annotation is false
However, writing SQL that performs optimistic concurrency control in the SQL file is the application developer's responsibility. For example, as in the SQL below, you must specify the version number in the WHERE clause.
delete from EMPLOYEE where ID = /* employees.id */1 and VERSION = /* employees.version */1
A BatchOptimisticLockException
representing an optimistic concurrency control failure is thrown if the deleted count is 0 or greater than 1 for this SQL.
ignoreVersion¶
If the ignoreVersion
property of the @BatchDelete
annotation is true,
BatchOptimisticLockException
is not thrown even if the deleted count is 0 or greater than 1.
@BatchDelete(sqlFile = true, ignoreVersion = true)
int[] delete(List<Employee> employees);
suppressOptimisticLockException¶
If the suppressOptimisticLockException
property of @BatchDelete
is true
,
BatchOptimisticLockException
is not thrown even if the deleted count is 0 or greater than 1.
@BatchDelete(sqlFile = true, suppressOptimisticLockException = true)
int[] delete(List<Employee> employees);
Query timeout¶
You can specify the query timeout in seconds via the queryTimeout
property of the @BatchDelete
annotation.
@BatchDelete(queryTimeout = 10)
int[] delete(List<Employee> employees);
This setting is applied whether or not an SQL file is used.
If no value is set for the queryTimeout
property, the query timeout specified in the configuration class is used.
Batch size¶
You can specify the batch size via the batchSize
property of the @BatchDelete
annotation.
@BatchDelete(batchSize = 10)
int[] delete(List<Employee> employees);
This setting is applied whether or not an SQL file is used.
If no value is set for the batchSize
property, the batch size specified in the configuration class is applied.
SQL log output format¶
You can specify the SQL log output format via the sqlLog
property of the @BatchDelete
annotation.
@BatchDelete(sqlLog = SqlLogType.RAW)
int[] delete(List<Employee> employees);
SqlLogType.RAW
means the log contains the SQL with binding parameters.
Stored function¶
Contents
To call stored functions, you must annotate DAO methods with the @Function
annotation:
@Dao(config = AppConfig.class)
public interface EmployeeDao {
@Function
Integer execute(@In Integer id, @InOut Reference<BigDecimal> salary);
...
}
Return type¶
If the stored function returns nothing, the return type must be void
.
If the stored function returns a single result, the return type must be one of the following:
- Basic classes
- Domain classes
- Entity classes
- java.util.Map<String, Object>
- java.util.Optional, whose element type is one of Basic classes, Domain classes, Entity classes, and java.util.Map<String, Object>
- java.util.OptionalInt
- java.util.OptionalLong
- java.util.OptionalDouble
If the stored function returns a result list, the return type must be java.util.List
and its element type must be one of the following:
- Basic classes
- Domain classes
- Entity classes
- java.util.Map<String, Object>
- java.util.Optional whose element type is either Basic classes or Domain classes
- java.util.OptionalInt
- java.util.OptionalLong
- java.util.OptionalDouble
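For example, a DAO method for a stored function that returns a result list might look like the following sketch; the function name find_employees and the parameter are assumptions for illustration:

```java
@Dao(config = AppConfig.class)
public interface EmployeeDao {
    // Each row of the function's result is mapped to an Employee element.
    @Function(name = "find_employees")
    List<Employee> findEmployees(@In Integer departmentId);
}
```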
Function name¶
The annotated method name is recognized as the function name by default.
To override it, specify the value in the @Function
’s name
element:
@Function(name = "calculateSalary")
void execute(@In Integer id, @InOut Reference<BigDecimal> salary);
Parameters¶
The order of the stored function’s parameters must correspond to the order of the DAO method parameters.
All parameters must be annotated with one of the following annotations:
- @In
- @InOut
- @Out
- @ResultSet
IN parameter¶
To indicate IN parameters, annotate corresponding DAO method parameters with the @In
annotation.
The type of the DAO method parameter must be one of the following:
- Basic classes
- Domain classes
- java.util.Optional whose element type is either Basic classes or Domain classes
- java.util.OptionalInt
- java.util.OptionalLong
- java.util.OptionalDouble
Suppose you have the following definition:
@Function
void execute(@In Integer id);
You can invoke the method as follows:
EmployeeDao dao = new EmployeeDaoImpl();
dao.execute(1);
INOUT parameter¶
To indicate INOUT parameters, annotate corresponding DAO method parameters with
the @InOut
annotation.
The type of the DAO method parameter must be org.seasar.doma.jdbc.Reference
and its type parameter must be one of the following:
- Basic classes
- Domain classes
- java.util.Optional whose element type is either Basic classes or Domain classes
- java.util.OptionalInt
- java.util.OptionalLong
- java.util.OptionalDouble
Suppose you have the following definition:
@Function
void execute(@InOut Reference<BigDecimal> salary);
You can invoke the method as follows:
EmployeeDao dao = new EmployeeDaoImpl();
BigDecimal in = new BigDecimal(100);
Reference<BigDecimal> ref = new Reference<BigDecimal>(in);
dao.execute(ref);
BigDecimal out = ref.get();
OUT parameter¶
To indicate OUT parameters, annotate corresponding DAO method parameters with
the @Out
annotation.
The type of the DAO method parameter must be org.seasar.doma.jdbc.Reference
and its type parameter must be one of the following:
- Basic classes
- Domain classes
- java.util.Optional whose element type is either Basic classes or Domain classes
- java.util.OptionalInt
- java.util.OptionalLong
- java.util.OptionalDouble
Suppose you have the following definition:
@Function
Integer execute(@Out Reference<BigDecimal> salary);
You can invoke the method as follows:
EmployeeDao dao = new EmployeeDaoImpl();
Reference<BigDecimal> ref = new Reference<BigDecimal>();
Integer result = dao.execute(ref);
BigDecimal out = ref.get();
Cursor or result set¶
To indicate cursors or result sets,
annotate corresponding DAO method parameters with the @ResultSet
annotation.
The DAO method parameter type must be java.util.List
and its element type must be one of the following:
- Basic classes
- Domain classes
- Entity classes
- java.util.Map<String, Object>
- java.util.Optional whose element type is either Basic classes or Domain classes
- java.util.OptionalInt
- java.util.OptionalLong
- java.util.OptionalDouble
Suppose you have the following definition:
@Function
void execute(@ResultSet List<Employee> employee);
You can invoke the method as follows:
EmployeeDao dao = new EmployeeDaoImpl();
List<Employee> employees = new ArrayList<Employee>();
dao.execute(employees);
for (Employee e : employees) {
...
}
Stored procedure¶
Contents
To call stored procedures, you must annotate DAO methods with the @Procedure
annotation:
@Dao(config = AppConfig.class)
public interface EmployeeDao {
@Procedure
void execute(@In Integer id, @InOut Reference<BigDecimal> salary);
...
}
Return Type¶
The return type must be void
.
Procedure name¶
The annotated method name is recognized as the procedure name by default.
To override it, specify the value in the @Procedure
’s name
element:
@Procedure(name = "calculateSalary")
void execute(@In Integer id, @InOut Reference<BigDecimal> salary);
Parameters¶
The order of the stored procedure’s parameters must correspond to the order of the DAO method parameters.
All parameters must be annotated with one of the following annotations:
- @In
- @InOut
- @Out
- @ResultSet
IN parameter¶
To indicate IN parameters, annotate corresponding DAO method parameters with the @In
annotation.
The type of the DAO method parameter must be one of the following:
- Basic classes
- Domain classes
- java.util.Optional whose element type is either Basic classes or Domain classes
- java.util.OptionalInt
- java.util.OptionalLong
- java.util.OptionalDouble
Suppose you have the following definition:
@Procedure
void execute(@In Integer id);
You can invoke the method as follows:
EmployeeDao dao = new EmployeeDaoImpl();
dao.execute(1);
INOUT parameter¶
To indicate INOUT parameters, annotate corresponding DAO method parameters with
the @InOut
annotation.
The type of the DAO method parameter must be org.seasar.doma.jdbc.Reference
and its type parameter must be one of the following:
- Basic classes
- Domain classes
- java.util.Optional whose element type is either Basic classes or Domain classes
- java.util.OptionalInt
- java.util.OptionalLong
- java.util.OptionalDouble
Suppose you have the following definition:
@Procedure
void execute(@InOut Reference<BigDecimal> salary);
You can invoke the method as follows:
EmployeeDao dao = new EmployeeDaoImpl();
BigDecimal in = new BigDecimal(100);
Reference<BigDecimal> ref = new Reference<BigDecimal>(in);
dao.execute(ref);
BigDecimal out = ref.get();
OUT parameter¶
To indicate OUT parameters, annotate corresponding DAO method parameters with
the @Out
annotation.
The type of the DAO method parameter must be org.seasar.doma.jdbc.Reference
and its type parameter must be one of the following:
- Basic classes
- Domain classes
- java.util.Optional whose element type is either Basic classes or Domain classes
- java.util.OptionalInt
- java.util.OptionalLong
- java.util.OptionalDouble
Suppose you have the following definition:
@Procedure
void execute(@Out Reference<BigDecimal> salary);
You can invoke the method as follows:
EmployeeDao dao = new EmployeeDaoImpl();
Reference<BigDecimal> ref = new Reference<BigDecimal>();
dao.execute(ref);
BigDecimal out = ref.get();
Cursor or result set¶
To indicate cursors or result sets,
annotate corresponding DAO method parameters with the @ResultSet
annotation.
The DAO method parameter type must be java.util.List
and its element type must be one of the following:
- Basic classes
- Domain classes
- Entity classes
- java.util.Map<String, Object>
- java.util.Optional whose element type is either Basic classes or Domain classes
- java.util.OptionalInt
- java.util.OptionalLong
- java.util.OptionalDouble
Suppose you have the following definition:
@Procedure
void execute(@ResultSet List<Employee> employees);
You can invoke the method as follows:
EmployeeDao dao = new EmployeeDaoImpl();
List<Employee> employees = new ArrayList<Employee>();
dao.execute(employees);
for (Employee e : employees) {
...
}
Factories¶
Contents
To get instances from the factory methods of java.sql.Connection
,
annotate DAO methods with one of the following annotations:
- @ArrayFactory
- @BlobFactory
- @ClobFactory
- @NClobFactory
- @SQLXMLFactory
Creating Array instances¶
A return type must be java.sql.Array
and the number of parameters must be one.
The parameter type must be an array type and the parameter must not be null.
Specify a database type name to the @ArrayFactory
’s typeName
element:
@ArrayFactory(typeName = "integer")
Array createIntegerArray(Integer[] elements);
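A usage sketch follows; the ArrayDao interface, the generated ArrayDaoImpl class, and the element values are assumptions for illustration:

```java
// Hypothetical generated implementation of a DAO declaring createIntegerArray.
ArrayDao dao = new ArrayDaoImpl();
Array array = dao.createIntegerArray(new Integer[] {2, 3, 5});
// The Array instance can then be bound as a parameter in another query.
```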
Creating Blob instances¶
A return type must be java.sql.Blob
and the number of parameters must be zero:
@BlobFactory
Blob createBlob();
Creating Clob instances¶
A return type must be java.sql.Clob
and the number of parameters must be zero:
@ClobFactory
Clob createClob();
Creating NClob instances¶
A return type must be java.sql.NClob
and the number of parameters must be zero:
@NClobFactory
NClob createNClob();
Creating SQLXML instances¶
A return type must be java.sql.SQLXML
and the number of parameters must be zero:
@SQLXMLFactory
SQLXML createSQLXML();
Script¶
Contents
To run a series of static SQL statements,
annotate DAO methods with @Script
:
@Config(config = AppConfig.class)
public interface EmployeeDao {
@Script
void createTable();
...
}
The return type of the method must be void
and the number of parameters must be zero.
Script representation¶
Scripts in files¶
Encoding¶
The script files must be saved with UTF-8 encoding.
Location¶
The script files must be located in a directory below a “META-INF” directory that is included in the CLASSPATH.
Format of file path¶
The script file path must follow this format:
META-INF/path-format-of-dao-interface/dao-method.script
For example, when the DAO interface name is aaa.bbb.EmployeeDao
and the DAO method name is createTable
, the script file path is as follows:
META-INF/aaa/bbb/EmployeeDao/createTable.script
Dependency on a specific RDBMS¶
You can specify a dependency on a specific RDBMS by file name. To do this, put a hyphen “-” and the RDBMS name before the extension “.script”. For example, the file path specific to PostgreSQL is as follows:
META-INF/aaa/bbb/EmployeeDao/createTable-postgres.script
The script files specific to RDBMSs are given priority. For example, in the environment where PostgreSQL is used, “META-INF/aaa/bbb/EmployeeDao/createTable-postgres.script” is chosen instead of “META-INF/aaa/bbb/EmployeeDao/createTable.script”.
See also Dependency on a specific RDBMS.
Scripts in annotation¶
You can specify scripts to DAO methods with the @Sql
annotation:
@Dao(config = AppConfig.class)
public interface EmployeeDao {
@Sql("create table employee (id integer, name varchar(200))")
@Script
void createTable();
...
}
See also SQL templates in annotations.
Delimiter¶
There are two kinds of delimiters in scripts:
- statement delimiter
- block delimiter
The statement delimiter is always a semicolon ;
.
The block delimiter is determined by a Dialect
instance.
The RDBMS block delimiters are as follows:
RDBMS | Dialect | block delimiter |
---|---|---|
DB2 | Db2Dialect | @ |
H2 Database Engine 1.2.126 | H212126Dialect | |
H2 Database | H2Dialect | |
HSQLDB | HsqldbDialect | |
Microsoft SQL Server 2008 | Mssql2008Dialect | GO |
Microsoft SQL Server | MssqlDialect | GO |
MySQL | MySqlDialect | / |
Oracle Database | OracleDialect | / |
PostgreSQL | PostgresDialect | $$ |
SQLite | SqliteDialect | |
You can also specify the block delimiter in @Script
’s blockDelimiter
element:
@Script(blockDelimiter = "GO")
void createProcedure();
The corresponding script file is as follows:
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[MY_PROCEDURE]
AS
BEGIN
SET NOCOUNT ON;
END
GO
Stopping on error¶
Script execution stops when any statement fails.
To continue execution after an error, set the haltOnError
element to false
:
@Script(haltOnError = false)
void createTable();
Example¶
The following script is valid for Oracle Database:
/*
* table creation statement
*/
create table EMPLOYEE (
ID numeric(5) primary key, -- identifier is not generated automatically
NAME varchar2(20) -- first name only
);
/*
* insert statement
*/
insert into EMPLOYEE (ID, NAME) values (1, 'SMITH');
/*
* procedure creation block
*/
create or replace procedure proc
( cur out sys_refcursor,
employeeid in numeric
) as
begin
open cur for select * from employee where id > employeeid order by id;
end proc;
/
/*
* procedure creation block
*/
create or replace procedure proc2
( cur out sys_refcursor,
employeeid in numeric
) as
begin
open cur for select * from employee where id > employeeid order by id;
end proc2;
/
You can use both single-line comments --
and multi-line comments /* ... */
.
Each statement must end with a semicolon ;
.
Be careful that a new line doesn’t mean the end of a statement.
In this example, the slash /
is a block delimiter.
The block delimiter must appear at the beginning of a line and be followed by a new line.
SQL processor¶
Contents
SQL processors can handle the SQL statements generated from corresponding SQL templates.
To mark a DAO method as an SQL processor, annotate the method with @SqlProcessor
:
@Dao(config = AppConfig.class)
public interface EmployeeDao {
@SqlProcessor
<R> R process(Integer id, BiFunction<Config, PreparedSql, R> handler);
...
}
Warning
Being aware of SQL injection vulnerabilities is essential. If possible, consider alternatives to SQL processors.
Return type¶
The return type must be the same type as the third type parameter of BiFunction
:
@SqlProcessor
String process(Integer id, BiFunction<Config, PreparedSql, String> handler);
If the return type is void
, the third type parameter of BiFunction
must be Void
:
@SqlProcessor
void process(Integer id, BiFunction<Config, PreparedSql, Void> handler);
Parameter¶
Include a parameter whose type is BiFunction
.
The BiFunction
parameter accepts a configuration and an SQL statement then processes them.
Parameters other than the BiFunction
parameter are used in the SQL template.
Example¶
Suppose you want to change the SQL statement generated from an SQL template and execute it:
EmployeeDao dao = ...
dao.process(1, (config, preparedSql) -> {
String sql = preparedSql.getRawSql();
String anotherSql = createAnotherSql(sql);
DataSource dataSource = config.getDataSource();
Connection connection = dataSource.getConnection();
PreparedStatement statement = connection.prepareStatement(anotherSql);
return statement.execute();
});
select * from employee where id = /*^ id */0
Query builders¶
Contents
The package org.seasar.doma.jdbc.builder
provides SQL builders.
When it is difficult to build an SQL statement with Queries, consider using the SQL builders in a Default method.
Search¶
SelectBuilder builder = SelectBuilder.newInstance(config);
builder.sql("select");
builder.sql("id").sql(",");
builder.sql("name").sql(",");
builder.sql("salary");
builder.sql("from Emp");
builder.sql("where");
builder.sql("job_type = ").literal(String.class, "fulltime");
builder.sql("and");
builder.sql("name like ").param(String.class, "S%");
builder.sql("and");
builder.sql("age in (").params(Integer.class, Arrays.asList(20, 30, 40)).sql(")");
List<Emp> employees = builder.getEntityResultList(Emp.class);
You can get the result of the SQL execution in various ways.
Single record search¶
- getScalarSingleResult
- getOptionalScalarSingleResult
- getEntitySingleResult
- getOptionalEntitySingleResult
- getMapSingleResult
- getOptionalMapSingleResult
Multiple records search¶
- getScalarResultList
- getOptionalScalarResultList
- getEntityResultList
- getMapResultList
Stream search¶
- streamAsScalar
- streamAsOptionalScalar
- streamAsEntity
- streamAsMap
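As a sketch of the stream search methods, the following counts rows with streamAsEntity; the Emp entity follows the earlier example, and the exact accessor signature is an assumption:

```java
SelectBuilder builder = SelectBuilder.newInstance(config);
builder.sql("select id, name, salary from Emp");
// Process the result set as a stream without materializing all rows at once.
long count = builder.streamAsEntity(Emp.class, stream -> stream.count());
```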
Insert¶
InsertBuilder builder = InsertBuilder.newInstance(config);
builder.sql("insert into Emp");
builder.sql("(name, salary)");
builder.sql("values (");
builder.param(String.class, "SMITH").sql(", ");
builder.param(BigDecimal.class, new BigDecimal(1000)).sql(")");
builder.execute();
Update¶
UpdateBuilder builder = UpdateBuilder.newInstance(config);
builder.sql("update Emp");
builder.sql("set");
builder.sql("name = ").param(String.class, "SMITH").sql(",");
builder.sql("salary = ").param(BigDecimal.class, new BigDecimal("1000"));
builder.sql("where");
builder.sql("id = ").param(int.class, 10);
builder.execute();
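The package also provides DeleteBuilder, which follows the same pattern as the other builders. A sketch, assuming the Emp table used in the examples above:

```java
DeleteBuilder builder = DeleteBuilder.newInstance(config);
builder.sql("delete from Emp");
builder.sql("where");
builder.sql("name = ").param(String.class, "SMITH");
builder.sql("and");
builder.sql("salary = ").param(BigDecimal.class, new BigDecimal(1000));
builder.execute();
```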
Criteria API¶
Contents
- Criteria API
- Introduction
- Select statement
- Select settings (Entityql, NativeSql)
- Fetching (Entityql, NativeSql)
- Streaming (NativeSql)
- Select expression (NativeSql)
- Where expression (Entityql, NativeSql)
- Join expression
- Aggregate Functions (NativeSql)
- Group by expression (NativeSql)
- Having expression (NativeSql)
- Order by expression (Entityql, NativeSql)
- Distinct expression (Entityql, NativeSql)
- Limit and Offset expression (Entityql, NativeSql)
- For Update expression (Entityql, NativeSql)
- Union expression (NativeSql)
- Delete statement
- Insert statement
- Update statement
- Property Expressions (Entityql, NativeSql)
- Tips
- Sample projects
Introduction¶
There are two kinds of DSLs in the Criteria API:
- The Entityql DSL
- The NativeSql DSL
Both require predefined Entity classes and metamodel classes.
We use the following Entity classes to show you some examples:
@Entity(metamodel = @Metamodel)
public class Employee {
@Id private Integer employeeId;
private Integer employeeNo;
private String employeeName;
private Integer managerId;
private LocalDate hiredate;
private Salary salary;
private Integer departmentId;
private Integer addressId;
@Version private Integer version;
@OriginalStates private Employee states;
@Transient private Department department;
@Transient private Employee manager;
@Transient private Address address;
// getter and setter
}
@Entity(metamodel = @Metamodel)
public class Department {
@Id private Integer departmentId;
private Integer departmentNo;
private String departmentName;
private String location;
@Version private Integer version;
@OriginalStates private Department originalStates;
@Transient private List<Employee> employeeList = new ArrayList<>();
// getter and setter
}
Note that both of the above classes are annotated with @Entity(metamodel = @Metamodel)
.
The metamodel = @Metamodel
indicates that the annotated entity
has a corresponding metamodel class generated by Doma’s annotation processor.
In our examples, the metamodel classes are Employee_
and Department_
.
These metamodels allow you to make your query typesafe.
You can customize the names of the metamodels via the Metamodel annotation's elements.
To customize all metamodels in bulk, you can use the annotation processor options. See Annotation processing and check the following options:
- doma.metamodel.enabled
- doma.metamodel.prefix
- doma.metamodel.suffix
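A per-entity customization might look like the following sketch, under the assumption that @Metamodel supports prefix and suffix elements:

```java
// Hypothetical: generates a metamodel class named "MEmployee"
// instead of the default "Employee_".
@Entity(metamodel = @Metamodel(prefix = "M", suffix = ""))
public class Employee {
    // ...
}
```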
Entityql DSL¶
The Entityql DSL can query and associate entities.
The entry point is the org.seasar.doma.jdbc.criteria.Entityql
class.
This class has the following methods:
- from
- insert
- delete
- update
You can instantiate the Entityql
class as follows:
Entityql entityql = new Entityql(config);
For example, to query Employee
and Department
entities and associate them, write as follows:
Employee_ e = new Employee_();
Department_ d = new Department_();
List<Employee> list =
entityql
.from(e)
.innerJoin(d, on -> on.eq(e.departmentId, d.departmentId))
.where(c -> c.eq(d.departmentName, "SALES"))
.associate(
e,
d,
(employee, department) -> {
employee.setDepartment(department);
department.getEmployeeList().add(employee);
})
.fetch();
The above query issues the following SQL statement:
select t0_.EMPLOYEE_ID, t0_.EMPLOYEE_NO, t0_.EMPLOYEE_NAME, t0_.MANAGER_ID,
t0_.HIREDATE, t0_.SALARY, t0_.DEPARTMENT_ID, t0_.ADDRESS_ID, t0_.VERSION,
t1_.DEPARTMENT_ID, t1_.DEPARTMENT_NO, t1_.DEPARTMENT_NAME, t1_.LOCATION, t1_.VERSION
from EMPLOYEE t0_ inner join DEPARTMENT t1_ on (t0_.DEPARTMENT_ID = t1_.DEPARTMENT_ID)
where t1_.DEPARTMENT_NAME = ?
NativeSql DSL¶
The NativeSql DSL can issue more complex SQL statements than the Entityql DSL can. But note that the NativeSql DSL doesn't support associating entities.
The entry point is the org.seasar.doma.jdbc.criteria.NativeSql
class.
This class has the following methods:
- from
- delete
- insert
- update
You can instantiate the NativeSql
class as follows:
NativeSql nativeSql = new NativeSql(config);
For example, to query two columns with GROUP BY and HAVING clauses, write as follows:
Employee_ e = new Employee_();
Department_ d = new Department_();
List<Tuple2<Long, String>> list =
nativeSql
.from(e)
.innerJoin(d, on -> on.eq(e.departmentId, d.departmentId))
.groupBy(d.departmentName)
.having(c -> c.gt(count(), 3L))
.orderBy(c -> c.asc(count()))
.select(count(), d.departmentName)
.fetch();
The above query issues the following SQL statement:
select count(*), t1_.DEPARTMENT_NAME from EMPLOYEE t0_
inner join DEPARTMENT t1_ on (t0_.DEPARTMENT_ID = t1_.DEPARTMENT_ID)
group by t1_.DEPARTMENT_NAME
having count(*) > ?
order by count(*) asc
The difference between the two DSLs¶
The biggest difference between the two DSLs is that the Entityql DSL removes duplicated data from the fetched results, while the NativeSql DSL doesn't.
See the following example:
Department_ d = new Department_();
Employee_ e = new Employee_();
// (1) Use Entityql DSL
List<Department> list1 =
entityql.from(d).innerJoin(e, on -> on.eq(d.departmentId, e.departmentId)).fetch();
// (2) Use NativeSql DSL
List<Department> list2 =
nativeSql.from(d).innerJoin(e, on -> on.eq(d.departmentId, e.departmentId)).fetch();
System.out.println(list1.size()); // 3
System.out.println(list2.size()); // 14
Both (1) and (2) issue the same SQL statement as follows:
select t0_.DEPARTMENT_ID, t0_.DEPARTMENT_NO, t0_.DEPARTMENT_NAME, t0_.LOCATION,t0_.VERSION
from DEPARTMENT t0_
inner join EMPLOYEE t1_ on (t0_.DEPARTMENT_ID = t1_.DEPARTMENT_ID)
The ResultSet contains 14 rows, but the Entityql DSL returns only 3 rows because it removes duplicate Department entities. The Entityql DSL uses the id properties to determine entity uniqueness.
On the other hand, the NativeSql DSL returns the data exactly as retrieved from the database. It puts the results into entity objects but handles them as plain DTOs.
Select statement¶
Select settings (Entityql, NativeSql)¶
We support the following settings:
- allowEmptyWhere
- comment
- fetchSize
- maxRows
- queryTimeout
- sqlLogType
They are all optional. You can apply them as follows:
Employee_ e = new Employee_();
List<Employee> list = entityql.from(e, settings -> {
settings.setAllowEmptyWhere(false);
settings.setComment("all employees");
settings.setFetchSize(100);
settings.setMaxRows(100);
settings.setSqlLogType(SqlLogType.RAW);
settings.setQueryTimeout(1000);
}).fetch();
Fetching (Entityql, NativeSql)¶
Both Entityql DSL and NativeSql DSL support the following methods to fetch data from a database:
- fetch
- fetchOne
- fetchOptional
- stream
Employee_ e = new Employee_();
// The fetch method returns results as a list.
List<Employee> list =
entityql.from(e).fetch();
// The fetchOne method returns a single result. The result may be null.
Employee employee =
entityql.from(e).where(c -> c.eq(e.employeeId, 1)).fetchOne();
// The fetchOptional method returns a single result as an optional object.
Optional<Employee> optional =
entityql.from(e).where(c -> c.eq(e.employeeId, 1)).fetchOptional();
// The stream method returns results as a stream.
// The following code is equivalent to "entityql.from(e).fetch().stream()"
Stream<Employee> stream =
entityql.from(e).stream();
Streaming (NativeSql)¶
The NativeSql DSL supports the following methods:
- mapStream
- collect
Employee_ e = new Employee_();
// The mapStream method handles a stream.
Map<Integer, List<Employee>> map =
nativeSql
.from(e)
.mapStream(stream -> stream.collect(groupingBy(Employee::getDepartmentId)));
// The collect method is a shortcut of the mapStream method.
// The following code does the same thing with the above.
Map<Integer, List<Employee>> map2 =
nativeSql.from(e).collect(groupingBy(Employee::getDepartmentId));
These methods handle a stream that wraps a JDBC ResultSet, so they are useful for processing a large ResultSet efficiently.
Select expression (NativeSql)¶
To project columns, use the select method.
To project one column, pass one property to the select method as follows:
Employee_ e = new Employee_();
List<String> list = nativeSql.from(e).select(e.employeeName).fetch();
The above query issues the following SQL statement:
select t0_.EMPLOYEE_NAME from EMPLOYEE t0_
To project two or more columns, pass two or more properties to the select method as follows:
Employee_ e = new Employee_();
List<Tuple2<String, Integer>> list =
nativeSql.from(e).select(e.employeeName, e.employeeNo).fetch();
The above query issues the following SQL statement:
select t0_.EMPLOYEE_NAME, t0_.EMPLOYEE_NO from EMPLOYEE t0_
When two to nine columns are selected, the results are held in Tuple2
through Tuple9
.
When more than nine columns are selected, the results are held in Row
.
Where expression (Entityql, NativeSql)¶
We support the following operators and predicates:
- eq - (=)
- ne - (<>)
- ge - (>=)
- gt - (>)
- le - (<=)
- lt - (<)
- isNull - (is null)
- isNotNull - (is not null)
- like
- notLike - (not like)
- between
- in
- notIn - (not in)
- exists
- notExists - (not exists)
We also support the following logical operators:
- and
- or
- not
Employee_ e = new Employee_();
List<Employee> list =
entityql
.from(e)
.where(
c -> {
c.eq(e.departmentId, 2);
c.isNotNull(e.managerId);
c.or(
() -> {
c.gt(e.salary, new Salary("1000"));
c.lt(e.salary, new Salary("2000"));
});
})
.fetch();
The above query issues the following SQL statement:
select t0_.EMPLOYEE_ID, t0_.EMPLOYEE_NO, t0_.EMPLOYEE_NAME, t0_.MANAGER_ID, t0_.HIREDATE,
t0_.SALARY, t0_.DEPARTMENT_ID, t0_.ADDRESS_ID, t0_.VERSION
from EMPLOYEE t0_
where t0_.DEPARTMENT_ID = ? and t0_.MANAGER_ID is not null or (t0_.SALARY > ? and t0_.SALARY < ?)
You can write a subquery as follows:
Employee_ e = new Employee_();
Employee_ e2 = new Employee_();
List<Employee> list =
entityql
.from(e)
.where(c -> c.in(e.employeeId, c.from(e2).select(e2.managerId)))
.orderBy(c -> c.asc(e.employeeId))
.fetch();
The above query issues the following SQL statement:
select t0_.EMPLOYEE_ID, t0_.EMPLOYEE_NO, t0_.EMPLOYEE_NAME, t0_.MANAGER_ID, t0_.HIREDATE,
t0_.SALARY, t0_.DEPARTMENT_ID, t0_.ADDRESS_ID, t0_.VERSION
from EMPLOYEE t0_
where t0_.EMPLOYEE_ID in (select t1_.MANAGER_ID from EMPLOYEE t1_)
order by t0_.EMPLOYEE_ID asc
Dynamic where expression (Entityql, NativeSql)¶
A where expression uses only the evaluated operators to build the WHERE clause.
When no operators are evaluated in a where expression, the built statement has no WHERE clause.
Likewise, when no operators are evaluated in a logical operator expression, the built statement omits that logical operator expression.
For example, suppose that a where expression contains a conditional expression as follows:
Employee_ e = new Employee_();
List<Employee> list =
entityql
.from(e)
.where(
c -> {
c.eq(e.departmentId, 1);
if (name != null) {
c.like(e.employeeName, name);
}
})
.fetch();
In the case that the name
variable is null
, the like
expression is ignored.
The above query issues the following SQL statement:
select t0_.EMPLOYEE_ID, t0_.EMPLOYEE_NO, t0_.EMPLOYEE_NAME, t0_.MANAGER_ID, t0_.HIREDATE,
t0_.SALARY, t0_.DEPARTMENT_ID, t0_.ADDRESS_ID, t0_.VERSION
from EMPLOYEE t0_ where t0_.DEPARTMENT_ID = ?
Join expression¶
We support the following expressions:
- innerJoin - (inner join)
- leftJoin - (left outer join)
innerJoin (Entityql, NativeSql)¶
Employee_ e = new Employee_();
Department_ d = new Department_();
List<Employee> list =
entityql.from(e).innerJoin(d, on -> on.eq(e.departmentId, d.departmentId)).fetch();
The above query issues the following SQL statement:
select t0_.EMPLOYEE_ID, t0_.EMPLOYEE_NO, t0_.EMPLOYEE_NAME, t0_.MANAGER_ID, t0_.HIREDATE,
t0_.SALARY, t0_.DEPARTMENT_ID, t0_.ADDRESS_ID, t0_.VERSION
from EMPLOYEE t0_
inner join DEPARTMENT t1_ on (t0_.DEPARTMENT_ID = t1_.DEPARTMENT_ID)
leftJoin (Entityql, NativeSql)¶
Employee_ e = new Employee_();
Department_ d = new Department_();
List<Employee> list =
entityql.from(e).leftJoin(d, on -> on.eq(e.departmentId, d.departmentId)).fetch();
The above query issues the following SQL statement:
select t0_.EMPLOYEE_ID, t0_.EMPLOYEE_NO, t0_.EMPLOYEE_NAME, t0_.MANAGER_ID, t0_.HIREDATE,
t0_.SALARY, t0_.DEPARTMENT_ID, t0_.ADDRESS_ID, t0_.VERSION
from EMPLOYEE t0_
left outer join DEPARTMENT t1_ on (t0_.DEPARTMENT_ID = t1_.DEPARTMENT_ID)
associate (Entityql)¶
You can associate entities with the associate
operation in the Entityql DSL.
The associate
operation must be used together with a join expression.
Employee_ e = new Employee_();
Department_ d = new Department_();
List<Employee> list =
entityql
.from(e)
.innerJoin(d, on -> on.eq(e.departmentId, d.departmentId))
.where(c -> c.eq(d.departmentName, "SALES"))
.associate(
e,
d,
(employee, department) -> {
employee.setDepartment(department);
department.getEmployeeList().add(employee);
})
.fetch();
The above query issues the following SQL statement:
select t0_.EMPLOYEE_ID, t0_.EMPLOYEE_NO, t0_.EMPLOYEE_NAME, t0_.MANAGER_ID,
t0_.HIREDATE, t0_.SALARY, t0_.DEPARTMENT_ID, t0_.ADDRESS_ID, t0_.VERSION,
t1_.DEPARTMENT_ID, t1_.DEPARTMENT_NO, t1_.DEPARTMENT_NAME, t1_.LOCATION, t1_.VERSION
from EMPLOYEE t0_ inner join DEPARTMENT t1_ on (t0_.DEPARTMENT_ID = t1_.DEPARTMENT_ID)
where t1_.DEPARTMENT_NAME = ?
You can associate many entities:
Employee_ e = new Employee_();
Department_ d = new Department_();
Address_ a = new Address_();
List<Employee> list =
entityql
.from(e)
.innerJoin(d, on -> on.eq(e.departmentId, d.departmentId))
.innerJoin(a, on -> on.eq(e.addressId, a.addressId))
.where(c -> c.eq(d.departmentName, "SALES"))
.associate(
e,
d,
(employee, department) -> {
employee.setDepartment(department);
department.getEmployeeList().add(employee);
})
.associate(e, a, Employee::setAddress)
.fetch();
Dynamic join expression (Entityql, NativeSql)¶
A join expression uses only the operators that are actually evaluated to build a JOIN clause.
When no operators are evaluated in a join expression, the built statement doesn't have any JOIN clause.
For example, suppose that a join expression contains a conditional expression as follows:
Employee_ e = new Employee_();
Employee_ e2 = new Employee_();
List<Employee> list =
entityql
.from(e)
.innerJoin(
e2,
on -> {
if (join) {
on.eq(e.managerId, e2.employeeId);
}
})
.fetch();
When the join variable is false, the on expression is ignored.
The above query issues the following SQL statement:
select t0_.EMPLOYEE_ID, t0_.EMPLOYEE_NO, t0_.EMPLOYEE_NAME, t0_.MANAGER_ID, t0_.HIREDATE,
t0_.SALARY, t0_.DEPARTMENT_ID, t0_.ADDRESS_ID, t0_.VERSION
from EMPLOYEE t0_
Dynamic associate (Entityql)¶
When you use the above dynamic join expression, the association must be optional.
To do this, pass the result of AssociationOption.optional() to the associate method:
Employee_ e = new Employee_();
Department_ d = new Department_();
List<Employee> list =
entityql
.from(e)
.innerJoin(
d,
on -> {
if (join) {
on.eq(e.departmentId, d.departmentId);
}
})
.associate(
e,
d,
(employee, department) -> {
employee.setDepartment(department);
department.getEmployeeList().add(employee);
},
AssociationOption.optional())
.fetch();
Fetch multiple entities at once (NativeSql)¶
When the select method accepts multiple metamodels, the fetch method returns tuples of the corresponding entities:
Employee_ e = new Employee_();
Department_ d = new Department_();
List<Tuple2<Department, Employee>> list =
nativeSql
.from(d)
.leftJoin(e, on -> on.eq(d.departmentId, e.departmentId))
.where(c -> c.eq(d.departmentId, 4))
.select(d, e)
.fetch();
The above query issues the following SQL statement:
select t0_.DEPARTMENT_ID, t0_.DEPARTMENT_NO, t0_.DEPARTMENT_NAME, t0_.LOCATION,
t0_.VERSION, t1_.EMPLOYEE_ID, t1_.EMPLOYEE_NO, t1_.EMPLOYEE_NAME, t1_.MANAGER_ID,
t1_.HIREDATE, t1_.SALARY, t1_.DEPARTMENT_ID, t1_.ADDRESS_ID, t1_.VERSION
from DEPARTMENT t0_ left outer join EMPLOYEE t1_ on (t0_.DEPARTMENT_ID = t1_.DEPARTMENT_ID)
where t0_.DEPARTMENT_ID = ?
The entity included in the tuple is null when all properties of the entity are null.
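For example, you might read each fetched tuple as follows. This is a sketch; the getItem1/getItem2 accessor names are assumed from the Tuple2 class:

```java
// Iterating over the tuples fetched by the query above (sketch)
for (Tuple2<Department, Employee> tuple : list) {
  Department department = tuple.getItem1();
  Employee employee = tuple.getItem2();
  // employee is null when the left join found no matching EMPLOYEE row
}
```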
Aggregate Functions (NativeSql)¶
We support the following aggregate functions:
- avg(property)
- count()
- count(property)
- countDistinct(property)
- max(property)
- min(property)
- sum(property)
These are defined in the org.seasar.doma.jdbc.criteria.expression.Expressions
class.
Use them with static import.
For example, you can pass the sum
function to the select method:
Employee_ e = new Employee_();
Salary salary = nativeSql.from(e).select(sum(e.salary)).fetchOne();
The above query issues the following SQL statement:
select sum(t0_.SALARY) from EMPLOYEE t0_
Group by expression (NativeSql)¶
Employee_ e = new Employee_();
List<Tuple2<Integer, Long>> list =
nativeSql.from(e).groupBy(e.departmentId).select(e.departmentId, count()).fetch();
The above query issues the following SQL statement:
select t0_.DEPARTMENT_ID, count(*) from EMPLOYEE t0_ group by t0_.DEPARTMENT_ID
When you don't specify a group by expression, it is inferred from the select expression automatically. So the following code issues the same SQL statement as above:
Employee_ e = new Employee_();
List<Tuple2<Integer, Long>> list =
nativeSql.from(e).select(e.departmentId, count()).fetch();
Having expression (NativeSql)¶
We support the following operators:
- eq - (=)
- ne - (<>)
- ge - (>=)
- gt - (>)
- le - (<=)
- lt - (<)
We also support the following logical operators:
- and
- or
- not
Employee_ e = new Employee_();
Department_ d = new Department_();
List<Tuple2<Long, String>> list =
nativeSql
.from(e)
.innerJoin(d, on -> on.eq(e.departmentId, d.departmentId))
.having(c -> c.gt(count(), 3L))
.orderBy(c -> c.asc(count()))
.select(count(), d.departmentName)
.fetch();
The above query issues the following SQL statement:
select count(*), t1_.DEPARTMENT_NAME
from EMPLOYEE t0_
inner join DEPARTMENT t1_ on (t0_.DEPARTMENT_ID = t1_.DEPARTMENT_ID)
group by t1_.DEPARTMENT_NAME having count(*) > ?
order by count(*) asc
Dynamic having expression (NativeSql)¶
A having expression uses only the operators that are actually evaluated to build a HAVING clause.
When no operators are evaluated in a having expression, the built statement doesn't have any HAVING clause.
Likewise, when no operators are evaluated in a logical operator expression, the built statement doesn't contain that logical operator expression.
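Like a dynamic join expression, a having expression can contain a conditional expression. The following sketch assumes a hypothetical boolean variable filter; when filter is false, no HAVING clause is generated:

```java
Employee_ e = new Employee_();
Department_ d = new Department_();
List<Tuple2<Long, String>> list =
    nativeSql
        .from(e)
        .innerJoin(d, on -> on.eq(e.departmentId, d.departmentId))
        .having(
            c -> {
              if (filter) { // hypothetical flag; when false, no operator is evaluated
                c.gt(count(), 3L);
              }
            })
        .select(count(), d.departmentName)
        .fetch();
```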
Order by expression (Entityql, NativeSql)¶
We support the following order operations:
- asc
- desc
Employee_ e = new Employee_();
List<Employee> list =
entityql
.from(e)
.orderBy(
c -> {
c.asc(e.departmentId);
c.desc(e.salary);
})
.fetch();
The above query issues the following SQL statement:
select t0_.EMPLOYEE_ID, t0_.EMPLOYEE_NO, t0_.EMPLOYEE_NAME, t0_.MANAGER_ID, t0_.HIREDATE,
t0_.SALARY, t0_.DEPARTMENT_ID, t0_.ADDRESS_ID, t0_.VERSION
from EMPLOYEE t0_
order by t0_.DEPARTMENT_ID asc, t0_.SALARY desc
Dynamic order by expression (NativeSql)¶
An order by expression uses only the operators that are actually evaluated to build an ORDER BY clause.
When no operators are evaluated in an order by expression, the built statement doesn't have any ORDER BY clause.
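For example, the following sketch assumes a hypothetical boolean variable sortBySalary; when it is false, the built statement has no ORDER BY clause:

```java
Employee_ e = new Employee_();
List<Employee> list =
    nativeSql
        .from(e)
        .orderBy(
            c -> {
              if (sortBySalary) { // hypothetical flag; when false, no operator is evaluated
                c.desc(e.salary);
              }
            })
        .fetch();
```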
Distinct expression (Entityql, NativeSql)¶
List<Department> list =
nativeSql
.from(d)
.distinct()
.leftJoin(e, on -> on.eq(d.departmentId, e.departmentId))
.fetch();
The above query issues the following SQL statement:
select distinct t0_.DEPARTMENT_ID, t0_.DEPARTMENT_NO, t0_.DEPARTMENT_NAME,
t0_.LOCATION, t0_.VERSION
from DEPARTMENT t0_
left outer join EMPLOYEE t1_ on (t0_.DEPARTMENT_ID = t1_.DEPARTMENT_ID)
Limit and Offset expression (Entityql, NativeSql)¶
Employee_ e = new Employee_();
List<Employee> list =
nativeSql.from(e).limit(5).offset(3).orderBy(c -> c.asc(e.employeeNo)).fetch();
The above query issues the following SQL statement:
select t0_.EMPLOYEE_ID, t0_.EMPLOYEE_NO, t0_.EMPLOYEE_NAME, t0_.MANAGER_ID, t0_.HIREDATE,
t0_.SALARY, t0_.DEPARTMENT_ID, t0_.ADDRESS_ID, t0_.VERSION
from EMPLOYEE t0_
order by t0_.EMPLOYEE_NO asc
offset 3 rows fetch first 5 rows only
Dynamic Limit and Offset expression (Entityql, NativeSql)¶
A limit expression uses only a non-null value to build a FETCH FIRST clause. When the value is null, the built statement doesn't have any FETCH FIRST clause.
Likewise, an offset expression uses only a non-null value to build an OFFSET clause. When the value is null, the built statement doesn't have any OFFSET clause.
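For example, the following sketch assumes hypothetical Integer variables limit and offset that may be null; a null value leaves the corresponding clause out of the built statement:

```java
Employee_ e = new Employee_();
List<Employee> list =
    nativeSql
        .from(e)
        .orderBy(c -> c.asc(e.employeeNo))
        .limit(limit)   // skipped when limit is null
        .offset(offset) // skipped when offset is null
        .fetch();
```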
For Update expression (Entityql, NativeSql)¶
Employee_ e = new Employee_();
List<Employee> list = nativeSql.from(e).where(c -> c.eq(e.employeeId, 1)).forUpdate().fetch();
The above query issues the following SQL statement:
select t0_.EMPLOYEE_ID, t0_.EMPLOYEE_NO, t0_.EMPLOYEE_NAME, t0_.MANAGER_ID, t0_.HIREDATE,
t0_.SALARY, t0_.DEPARTMENT_ID, t0_.ADDRESS_ID, t0_.VERSION
from EMPLOYEE t0_
where t0_.EMPLOYEE_ID = ?
for update
Union expression (NativeSql)¶
We support the following expressions:
- union
- unionAll - (union all)
Employee_ e = new Employee_();
Department_ d = new Department_();
List<Tuple2<Integer, String>> list =
nativeSql
.from(e)
.select(e.employeeId, e.employeeName)
.union(nativeSql.from(d)
.select(d.departmentId, d.departmentName))
.fetch();
The above query issues the following SQL statement:
select t0_.EMPLOYEE_ID, t0_.EMPLOYEE_NAME from EMPLOYEE t0_
union
select t0_.DEPARTMENT_ID, t0_.DEPARTMENT_NAME from DEPARTMENT t0_
An order by expression with column indexes is also supported:
Employee_ e = new Employee_();
Department_ d = new Department_();
List<Tuple2<Integer, String>> list =
nativeSql
.from(e)
.select(e.employeeId, e.employeeName)
.union(nativeSql.from(d)
.select(d.departmentId, d.departmentName))
.orderBy(c -> c.asc(2))
.fetch();
Delete statement¶
For the specification of the where expression, see Where expression (Entityql, NativeSql). The same rule is applied to delete statements.
Delete settings (Entityql, NativeSql)¶
We support the following settings:
- allowEmptyWhere
- batchSize
- comment
- ignoreVersion
- queryTimeout
- sqlLogType
- suppressOptimisticLockException
They are all optional.
You can apply them as follows:
Employee_ e = new Employee_();
int count = nativeSql.delete(e, settings -> {
settings.setAllowEmptyWhere(true);
settings.setBatchSize(20);
settings.setComment("delete all");
settings.setIgnoreVersion(true);
settings.setQueryTimeout(1000);
settings.setSqlLogType(SqlLogType.RAW);
settings.setSuppressOptimisticLockException(true);
}).execute();
Note
If you want to build a delete statement without a WHERE clause, you have to enable the allowEmptyWhere setting.
Delete statement (Entityql)¶
Employee_ e = new Employee_();
Employee employee = entityql.from(e).where(c -> c.eq(e.employeeId, 5)).fetchOne();
Result<Employee> result = entityql.delete(e, employee).execute();
The above query issues the following SQL statement:
delete from EMPLOYEE where EMPLOYEE_ID = ? and VERSION = ?
Batch Delete is also supported:
Employee_ e = new Employee_();
List<Employee> employees =
entityql.from(e).where(c -> c.in(e.employeeId, Arrays.asList(5, 6))).fetch();
BatchResult<Employee> result = entityql.delete(e, employees).execute();
The execute method may throw the following exception:
- OptimisticLockException: if the entity has a version property and the update count is 0
Delete statement (NativeSql)¶
Employee_ e = new Employee_();
int count = nativeSql.delete(e).where(c -> c.ge(e.salary, new Salary("2000"))).execute();
The above query issues the following SQL statement:
delete from EMPLOYEE t0_ where t0_.SALARY >= ?
Insert statement¶
Insert settings (Entityql, NativeSql)¶
We support the following settings:
- comment
- queryTimeout
- sqlLogType
- batchSize
They are all optional.
You can apply them as follows:
Department_ d = new Department_();
int count =
nativeSql
.insert(d, settings -> {
settings.setComment("insert department");
settings.setQueryTimeout(1000);
settings.setSqlLogType(SqlLogType.RAW);
settings.setBatchSize(20);
})
.values(
c -> {
c.value(d.departmentId, 99);
c.value(d.departmentNo, 99);
c.value(d.departmentName, "aaa");
c.value(d.location, "bbb");
c.value(d.version, 1);
})
.execute();
Insert statement (Entityql)¶
Department_ d = new Department_();
Department department = new Department();
department.setDepartmentId(99);
department.setDepartmentNo(99);
department.setDepartmentName("aaa");
department.setLocation("bbb");
Result<Department> result = entityql.insert(d, department).execute();
The above query issues the following SQL statement:
insert into DEPARTMENT (DEPARTMENT_ID, DEPARTMENT_NO, DEPARTMENT_NAME, LOCATION, VERSION)
values (?, ?, ?, ?, ?)
Batch Insert is also supported:
Department_ d = new Department_();
Department department = ...;
Department department2 = ...;
List<Department> departments = Arrays.asList(department, department2);
BatchResult<Department> result = entityql.insert(d, departments).execute();
The execute method may throw the following exception:
- UniqueConstraintException: if a unique constraint is violated
Insert statement (NativeSql)¶
Department_ d = new Department_();
int count =
nativeSql
.insert(d)
.values(
c -> {
c.value(d.departmentId, 99);
c.value(d.departmentNo, 99);
c.value(d.departmentName, "aaa");
c.value(d.location, "bbb");
c.value(d.version, 1);
})
.execute();
The above query issues the following SQL statement:
insert into DEPARTMENT (DEPARTMENT_ID, DEPARTMENT_NO, DEPARTMENT_NAME, LOCATION, VERSION)
values (?, ?, ?, ?, ?)
The execute method may throw the following exception:
- UniqueConstraintException: if a unique constraint is violated
We also support the INSERT SELECT syntax as follows:
Department_ da = new Department_("DEPARTMENT_ARCHIVE");
Department_ d = new Department_();
int count =
nativeSql
.insert(da)
.select(c -> c.from(d).where(cc -> cc.in(d.departmentId, Arrays.asList(1, 2))).select())
.execute();
The above query issues the following SQL statement:
insert into DEPARTMENT_ARCHIVE (DEPARTMENT_ID, DEPARTMENT_NO, DEPARTMENT_NAME,
LOCATION, VERSION) select t0_.DEPARTMENT_ID, t0_.DEPARTMENT_NO, t0_.DEPARTMENT_NAME,
t0_.LOCATION, t0_.VERSION from DEPARTMENT t0_ where t0_.DEPARTMENT_ID in (?, ?)
Update statement¶
For the specification of the where expression, see Where expression (Entityql, NativeSql). The same rule is applied to update statements.
Update settings (Entityql, NativeSql)¶
We support the following settings:
- allowEmptyWhere
- batchSize
- comment
- ignoreVersion
- queryTimeout
- sqlLogType
- suppressOptimisticLockException
They are all optional.
You can apply them as follows:
Employee_ e = new Employee_();
int count = nativeSql.update(e, settings -> {
settings.setAllowEmptyWhere(true);
settings.setBatchSize(20);
settings.setComment("update all");
settings.setIgnoreVersion(true);
settings.setQueryTimeout(1000);
settings.setSqlLogType(SqlLogType.RAW);
settings.setSuppressOptimisticLockException(true);
}).set(c -> {
c.value(e.employeeName, "aaa");
}).execute();
Note
If you want to build an update statement without a WHERE clause, you have to enable the allowEmptyWhere setting.
Update statement (Entityql)¶
Employee_ e = new Employee_();
Employee employee = entityql.from(e).where(c -> c.eq(e.employeeId, 5)).fetchOne();
employee.setEmployeeName("aaa");
employee.setSalary(new Salary("2000"));
Result<Employee> result = entityql.update(e, employee).execute();
The above query issues the following SQL statement:
update EMPLOYEE set EMPLOYEE_NAME = ?, SALARY = ?, VERSION = ? + 1
where EMPLOYEE_ID = ? and VERSION = ?
Batch Update is also supported:
Employee_ e = new Employee_();
Employee employee = ...;
Employee employee2 = ...;
List<Employee> employees = Arrays.asList(employee, employee2);
BatchResult<Employee> result = entityql.update(e, employees).execute();
The execute method may throw the following exceptions:
- OptimisticLockException: if the entity has a version property and the update count is 0
- UniqueConstraintException: if a unique constraint is violated
Update statement (NativeSql)¶
Employee_ e = new Employee_();
int count =
nativeSql
.update(e)
.set(c -> c.value(e.departmentId, 3))
.where(
c -> {
c.isNotNull(e.managerId);
c.ge(e.salary, new Salary("2000"));
})
.execute();
The above query issues the following SQL statement:
update EMPLOYEE t0_ set t0_.DEPARTMENT_ID = ?
where t0_.MANAGER_ID is not null and t0_.SALARY >= ?
The execute method may throw the following exception:
- UniqueConstraintException: if a unique constraint is violated
Property Expressions (Entityql, NativeSql)¶
We support the following expressions:
Arithmetic Expressions:
- add - (+)
- sub - (-)
- mul - (*)
- div - (/)
- mod - (%)
String Expression:
- concat
Literal Expression:
- literal (for String and int)
These are defined in the org.seasar.doma.jdbc.criteria.expression.Expressions
class.
Use them with static import.
These expressions are useful for updating a column relative to its current value:
Employee_ e = new Employee_();
int count =
nativeSql
.update(e)
.set(
c -> {
c.value(e.employeeName, concat("[", concat(e.employeeName, "]")));
c.value(e.version, add(e.version, literal(1)));
})
.where(c -> c.eq(e.employeeId, 1))
.execute();
The above query issues the following SQL statement:
update EMPLOYEE t0_ set t0_.EMPLOYEE_NAME = concat('[', concat(t0_.EMPLOYEE_NAME, ']')),
t0_.VERSION = (t0_.VERSION + 1)
where t0_.EMPLOYEE_ID = ?
Tips¶
Execution in Dao (Entityql, NativeSql)¶
It is useful to execute DSLs in the default methods of a Dao interface.
To get a Config object, call Config.get(this) in the default method as follows:
@Dao
public interface EmployeeDao {
default Optional<Employee> selectById(Integer id) {
Entityql entityql = new Entityql(Config.get(this));
Employee_ e = new Employee_();
return entityql.from(e).where(c -> c.eq(e.employeeId, id)).fetchOptional();
}
}
Overwriting the table name (Entityql, NativeSql)¶
A metamodel constructor accepts a qualified table name, which overrides the metamodel's default table name.
It is useful to handle two tables that have the same data structure:
Department_ da = new Department_("DEPARTMENT_ARCHIVE");
Department_ d = new Department_();
int count =
nativeSql
.insert(da)
.select(c -> c.from(d).select())
.execute();
insert into DEPARTMENT_ARCHIVE (DEPARTMENT_ID, DEPARTMENT_NO, DEPARTMENT_NAME,
LOCATION, VERSION) select t0_.DEPARTMENT_ID, t0_.DEPARTMENT_NO, t0_.DEPARTMENT_NAME,
t0_.LOCATION, t0_.VERSION from DEPARTMENT t0_
Debugging (Entityql, NativeSql)¶
To know the SQL statement built by the DSLs, use the asSql
method:
Department_ d = new Department_();
Listable<Department> stmt = entityql.from(d).where(c -> c.eq(d.departmentName, "SALES"));
Sql<?> sql = stmt.asSql();
System.out.printf("Raw SQL : %s\n", sql.getRawSql());
System.out.printf("Formatted SQL: %s\n", sql.getFormattedSql());
The above code prints as follows:
Raw SQL : select t0_.DEPARTMENT_ID, t0_.DEPARTMENT_NO, t0_.DEPARTMENT_NAME, t0_.LOCATION, t0_.VERSION from DEPARTMENT t0_ where t0_.DEPARTMENT_NAME = ?
Formatted SQL: select t0_.DEPARTMENT_ID, t0_.DEPARTMENT_NO, t0_.DEPARTMENT_NAME, t0_.LOCATION, t0_.VERSION from DEPARTMENT t0_ where t0_.DEPARTMENT_NAME = 'SALES'
The asSql method doesn't issue the SQL statement to your database.
It only builds the SQL statement and returns it as an Sql object.
You can also get the Sql
object by calling the peek
method.
Department_ d = new Department_();
List<String> locations = nativeSql
.from(d)
.peek(System.out::println)
.where(c -> c.eq(d.departmentName, "SALES"))
.peek(System.out::println)
.orderBy(c -> c.asc(d.location))
.peek(sql -> System.out.println(sql.getFormattedSql()))
.select(d.location)
.peek(sql -> System.out.println(sql.getFormattedSql()))
.fetch();
The above code prints as follows:
select t0_.DEPARTMENT_ID, t0_.DEPARTMENT_NO, t0_.DEPARTMENT_NAME, t0_.LOCATION, t0_.VERSION from DEPARTMENT t0_
select t0_.DEPARTMENT_ID, t0_.DEPARTMENT_NO, t0_.DEPARTMENT_NAME, t0_.LOCATION, t0_.VERSION from DEPARTMENT t0_ where t0_.DEPARTMENT_NAME = ?
select t0_.DEPARTMENT_ID, t0_.DEPARTMENT_NO, t0_.DEPARTMENT_NAME, t0_.LOCATION, t0_.VERSION from DEPARTMENT t0_ where t0_.DEPARTMENT_NAME = 'SALES' order by t0_.LOCATION asc
select t0_.LOCATION from DEPARTMENT t0_ where t0_.DEPARTMENT_NAME = 'SALES' order by t0_.LOCATION asc
SQL templates¶
Contents
Overview¶
Doma supports SQL templates, called “two-way SQL”. “Two-way SQL” means that the SQL templates can be used in two ways:
- To build dynamic SQL statements from the templates.
- To execute the templates in SQL tools as they are.
Every SQL template must correspond to a DAO method. For example, suppose you have the pair of an SQL template and a DAO method as follows:
select * from employee where employee_id = /* employeeId */99
Employee selectById(Integer employeeId);
The employeeId
expression enclosed between /*
and */
corresponds to
the method parameter “employeeId” of the DAO.
At runtime, the SQL comment and the following number /* employeeId */99 are replaced with a bind variable ?, and the method parameter “employeeId” is bound to that variable.
The SQL statement generated from the SQL template is as follows:
select * from employee where employee_id = ?
The number 99 in the SQL template is test data and is never used at runtime.
The test data is only useful when you execute the SQL template as is.
In other words, you can check whether the SQL template is grammatically correct with your favorite SQL tools.
Each SQL template is represented by either a text file or an annotation.
SQL templates in files¶
You can specify SQL templates in text files:
@Dao
public interface EmployeeDao {
@Select
Employee selectById(Integer employeeId);
@Delete(sqlFile = true)
int deleteByName(Employee employee);
}
The selectById and deleteByName methods above are mapped onto their own SQL files.
DAO methods must be annotated with one of the following annotations:
- @Select
- @Insert(sqlFile = true)
- @Update(sqlFile = true)
- @Delete(sqlFile = true)
- @BatchInsert(sqlFile = true)
- @BatchUpdate(sqlFile = true)
- @BatchDelete(sqlFile = true)
Location¶
The SQL files must be located in directories below a “META-INF” directory which is included in CLASSPATH.
Format of file path¶
The SQL file path must follow this format:
META-INF/path-format-of-dao-interface/dao-method.sql
For example, when the DAO interface name is aaa.bbb.EmployeeDao
and the DAO method name is selectById
,
the SQL file path is as follows:
META-INF/aaa/bbb/EmployeeDao/selectById.sql
Dependency on a specific RDBMS¶
You can specify dependency on a specific RDBMS by file name. To do this, put the hyphen “-” and RDBMS name before the extension “.sql”. For example, the file path specific to PostgreSQL is as follows:
META-INF/aaa/bbb/EmployeeDao/selectById-postgres.sql
The SQL files specific to RDBMSs are given priority. For example, in the environment where PostgreSQL is used, “META-INF/aaa/bbb/EmployeeDao/selectById-postgres.sql” is chosen instead of “META-INF/aaa/bbb/EmployeeDao/selectById.sql”.
The RDBMS names stem from the dialects:

| RDBMS | Dialect | RDBMS Name |
|---|---|---|
| DB2 | Db2Dialect | db2 |
| H2 Database | H2Dialect | h2 |
| HSQLDB | HsqldbDialect | hsqldb |
| Microsoft SQL Server | MssqlDialect | mssql |
| MySQL | MySqlDialect | mysql |
| Oracle Database | OracleDialect | oracle |
| PostgreSQL | PostgresDialect | postgres |
| SQLite | SqliteDialect | sqlite |
SQL templates in annotations¶
You can specify SQL templates to DAO methods with the @Sql
annotation:
@Dao
public interface EmployeeDao {
@Sql("select * from employee where employee_id = /* employeeId */99")
@Select
Employee selectById(Integer employeeId);
@Sql("delete from employee where employee_name = /* employee.employeeName */'aaa'")
@Delete
int deleteByName(Employee employee);
}
The @Sql annotation must be combined with one of the following annotations:
- @Select
- @Script
- @Insert
- @Update
- @Delete
- @BatchInsert
- @BatchUpdate
- @BatchDelete
Directives¶
In SQL templates, SQL comments that follow specific rules are recognised as directives. The supported directives are as follows:
- Bind variable directive
- Literal variable directive
- Embedded variable directive
- Condition directive
- Loop directive
- Expansion directive
- Population directive
Note
See also Expression language for information about the expression language available in directives.
Bind variable directive¶
A bind variable directive is represented by the format /*...*/.
The expression enclosed between /* and */ is evaluated and
its evaluation result is passed to a bind variable in the SQL statement.
The directive must be followed by test data, which is never used at runtime.
Basic and domain parameters¶
A parameter whose type is one of the Basic classes or Domain classes is recognised as a bind variable.
The following example is the pair of a DAO method and an SQL template:
Employee selectById(Integer employeeId);
select * from employee where employee_id = /* employeeId */99
The following SQL statement is generated from the SQL template:
select * from employee where employee_id = ?
Parameters in IN clause¶
A parameter whose type is a subtype of java.lang.Iterable or an array type is recognised as bind variables in an IN clause.
The type argument of java.lang.Iterable must be one of the Basic classes or Domain classes.
The directives must be followed by test data enclosed between ( and ).
The following example is the pair of a DAO method and an SQL template:
List<Employee> selectByIdList(List<Integer> employeeIdList);
select * from employee where employee_id in /* employeeIdList */(1,2,3)
If the employeeIdList contains five elements,
the following SQL statement is generated from the SQL template:
select * from employee where employee_id in (?, ?, ?, ?, ?)
If the employeeIdList is empty,
the IN clause is replaced with in (null) at runtime:
select * from employee where employee_id in (null)
Literal variable directive¶
A literal variable directive is represented by the format /*^...*/.
The expression enclosed between /*^ and */ is evaluated and
its evaluation result is converted to literal form and embedded in the SQL statement.
The directive must be followed by test data, which is never used at runtime.
The following example is the pair of a DAO method and an SQL template:
Employee selectByCode(String code);
select * from employee where code = /*^ code */'test'
The DAO method is invoked as follows:
EmployeeDao dao = new EmployeeDaoImpl();
List<Employee> list = dao.selectByCode("abc");
The generated SQL statement is as follows:
select * from employee where code = 'abc'
Note
Literal variable directives are helpful to avoid bind variables and fix SQL plans.
Warning
Literal variable directives do not escape parameters against SQL injection,
but they reject parameters containing a single quotation mark '.
Embedded variable directive¶
An embedded variable directive is represented by the format /*#...*/.
The expression enclosed between /*# and */ is evaluated and
its evaluation result is embedded in the SQL statement.
The following example is the pair of a DAO method and an SQL template:
List<Employee> selectAll(BigDecimal salary, String orderBy);
select * from employee where salary > /* salary */100 /*# orderBy */
The DAO method is invoked as follows:
EmployeeDao dao = new EmployeeDaoImpl();
BigDecimal salary = new BigDecimal(1000);
String orderBy = "order by salary asc, employee_name";
List<Employee> list = dao.selectAll(salary, orderBy);
The generated SQL statement is as follows:
select * from employee where salary > ? order by salary asc, employee_name
Note
Embedded variable directives are helpful to build SQL fragments such as ORDER BY clause.
Warning
To prevent SQL injection vulnerabilities, embedded variable directives reject parameters containing the following values:
- a single quotation
'
- a semicolon
;
- two hyphens
--
- a slash and an asterisk
/*
Condition directive¶
The condition directive allows you to build SQL statements conditionally.
Synopsis¶
/*%if condition*/
...
/*%elseif condition2*/
...
/*%elseif condition3*/
...
/*%else*/
...
/*%end*/
The expressions condition
, condition2
, and condition3
must be evaluated
to either boolean
or java.lang.Boolean
.
The elseif
directives and the else
directive are optional.
if¶
Suppose you have the following SQL template:
select * from employee where
/*%if employeeId != null */
employee_id = /* employeeId */99
/*%end*/
If the employeeId
is not null
, the generated SQL statement is as follows:
select * from employee where employee_id = ?
If the employeeId
is null
, the generated SQL statement is as follows:
select * from employee
The SQL keyword where
is removed automatically.
elseif and else¶
Suppose you have the following SQL template:
select
*
from
employee
where
/*%if employeeId != null */
employee_id = /* employeeId */9999
/*%elseif departmentId != null */
and
department_id = /* departmentId */99
/*%else*/
and
department_id is null
/*%end*/
If employeeId != null is evaluated to true, the generated SQL statement is as follows:
select
*
from
employee
where
employee_id = ?
If employeeId == null && departmentId != null is evaluated to true,
the generated SQL statement is as follows:
select
*
from
employee
where
department_id = ?
The SQL keyword and preceding department_id is removed automatically.
If employeeId == null && departmentId == null is evaluated to true,
the generated SQL statement is as follows:
select
*
from
employee
where
department_id is null
The SQL keyword and preceding department_id is removed automatically.
Nested condition directive¶
You can nest condition directives as follows:
select * from employee where
/*%if employeeId != null */
employee_id = /* employeeId */99
/*%if employeeName != null */
and
employee_name = /* employeeName */'hoge'
/*%else*/
and
employee_name is null
/*%end*/
/*%end*/
Removal of clauses on the condition directive¶
The following clauses can become unnecessary because of condition directives:
- WHERE
- HAVING
- ORDER BY
- GROUP BY
In that case, they are removed automatically.
Suppose you have the following SQL template:
select * from employee where
/*%if employeeId != null */
employee_id = /* employeeId */99
/*%end*/
If employeeId != null is evaluated to false,
the generated SQL statement is as follows:
select * from employee
Because the SQL clause where
followed by /*%if ...*/
is unnecessary,
it is removed automatically.
Removal of AND and OR keywords on the condition directives¶
AND and OR keywords can become unnecessary because of condition directives. In that case, they are removed automatically.
Suppose you have the following SQL template:
select * from employee where
/*%if employeeId != null */
employee_id = /* employeeId */99
/*%end*/
and employee_name like 's%'
If employeeId != null is evaluated to false,
the generated SQL statement is as follows:
select * from employee where employee_name like 's%'
Because the SQL keyword and
following /*%end*/
is unnecessary,
it is removed automatically.
Restriction on condition directive¶
/*%if condition*/ and /*%end*/ must be included in the same SQL clause and at the same statement level.
The following template is invalid, because /*%if condition*/
is
in the FROM clause and /*%end*/
is in the WHERE clause:
select * from employee /*%if employeeId != null */
where employee_id = /* employeeId */99 /*%end*/
The following template is invalid, because /*%if condition*/
is
in the outer statement and /*%end*/
is in the inner statement:
select * from employee
where employee_id in /*%if departmentId != null */(select ... /*%end*/ ...)
Loop directive¶
The loop directive allows you to build SQL statements using loops.
Synopsis¶
/*%for item : sequence*/
...
/*%end*/
The item
is the loop variable.
The expression sequence
must be evaluated to a subtype of java.lang.Iterable
or an array type.
In the inside between /*%for item : sequence*/
and /*%end*/
,
two extra loop variables are available:
- item_index: the index (0-based) of the current item in the loop
- item_has_next: a boolean value indicating whether the next item exists in the sequence
The prefix item indicates the name of the loop variable.
for and item_has_next¶
Suppose you have the following SQL template:
select * from employee where
/*%for name : names */
employee_name like /* name */'hoge'
/*%if name_has_next */
/*# "or" */
/*%end */
/*%end*/
If the sequence names
contains three items,
the generated SQL statement is as follows:
select * from employee where
employee_name like ?
or
employee_name like ?
or
employee_name like ?
Removal of clauses on the loop directive¶
The following clauses can become unnecessary because of loop directives:
- WHERE
- HAVING
- ORDER BY
- GROUP BY
In that case, they are removed automatically.
Suppose you have the following SQL template:
select * from employee where
/*%for name : names */
employee_name like /* name */'hoge'
/*%if name_has_next */
/*# "or" */
/*%end */
/*%end*/
If the sequence names
is empty,
the generated SQL statement is as follows:
select * from employee
Because the SQL clause where
followed by /*%for ...*/
is unnecessary,
it is removed automatically.
Removal of AND and OR keywords on the loop directive¶
AND and OR keywords can become unnecessary because of loop directives. In that case, they are removed automatically.
Suppose you have the following SQL template:
select * from employee where
/*%for name : names */
employee_name like /* name */'hoge'
/*%if name_has_next */
/*# "or" */
/*%end */
/*%end*/
or
salary > 1000
If the sequence names
is empty,
the generated SQL statement is as follows:
select * from employee where salary > 1000
Because the SQL keyword or
following /*%end*/
is unnecessary,
it is removed automatically.
Restriction on loop directive¶
/*%for ...*/ and /*%end*/ must be included in the same SQL clause and at the same statement level.
See also Restriction on condition directive.
Expansion directive¶
The expansion directive allows you to build the column list of a SELECT clause from the definition of an entity class.
Synopsis¶
/*%expand alias*/
The expression alias
is optional.
If it is specified, it must be evaluated to java.lang.String
.
The directive must be followed by the asterisk *
.
expand¶
Suppose you have the following SQL template and the entity class mapped to the template:
select /*%expand*/* from employee
@Entity
public class Employee {
Integer id;
String name;
Integer age;
}
The generated SQL statement is as follows:
select id, name, age from employee
If you give the table an alias, specify the same alias in the expansion directive:
select /*%expand "e" */* from employee e
The generated SQL statement is as follows:
select e.id, e.name, e.age from employee e
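The expansion rule can be pictured as simple string building. In reality the column names come from the entity's properties; this is an illustration, not Doma's implementation:

```java
class ExpandSketch {
    // Builds the column list that /*%expand "alias"*/ produces, prefixing each
    // column with the alias when one is given.
    static String expand(String alias, String... columns) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < columns.length; i++) {
            if (i > 0) {
                sb.append(", ");
            }
            if (!alias.isEmpty()) {
                sb.append(alias).append('.');
            }
            sb.append(columns[i]);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(expand("e", "id", "name", "age"));
    }
}
```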
Population directive¶
The population directive allows you to build the column list of an UPDATE SET clause from the definition of an entity class.
Synopsis¶
/*%populate*/
populate¶
Suppose you have the following SQL template and the entity class mapped to the template:
update employee set /*%populate*/ id = id where age < 30
@Entity
public class Employee {
Integer id;
String name;
Integer age;
}
The generated SQL statement is as follows:
update employee set id = ?, name = ?, age = ? where age < 30
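The population rule can likewise be sketched as plain string building over the entity's properties (an illustration, not Doma's implementation):

```java
class PopulateSketch {
    // Builds the "column = ?" list that /*%populate*/ produces for a SET clause.
    static String populate(String... columns) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < columns.length; i++) {
            if (i > 0) {
                sb.append(", ");
            }
            sb.append(columns[i]).append(" = ?");
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(populate("id", "name", "age"));
    }
}
```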
Comments¶
This section shows you how to distinguish directives from normal SQL comments.
Single line comment¶
A string consisting of two hyphens (--) always begins a single line comment. It is never a directive.
Multi line comment¶
If the character following /* is not permissible as the first character of a Java identifier and is none of %, #, @, " and ', then the /* begins a multi line comment.
The following all begin a multi line comment:
- /**…*/
- /*+…*/
- /*=…*/
- /*:…*/
- /*;…*/
- /*(…*/
- /*)…*/
- /*&…*/
On the other hand, the following all begin a directive:
- /* …*/
- /*a…*/
- /*$…*/
- /*@…*/
- /*"…*/
- /*'…*/
- /*#…*/
- /*%…*/
Note
We recommend you always use /**...*/
to begin multi line comments because it is simple.
Expression language¶
Contents
You can write simple expressions in directives of SQL templates. The grammar is almost the same as Java's, but not everything that Java can do is possible.
Note
Especially, the big difference is how to use optional types like java.util.Optional
.
In the expression, a value of Optional
type is always converted
to a value of the element type automatically.
For example a value of the Optional<String>
type is treated as a value of String
type.
Therefore, we can't call methods of the Optional type, nor can we call methods that take Optional parameters.
When you want to check existence of a value, use /*%if optional != null */
instead of /*%if optional.isPresent() */
.
The same is true for java.util.OptionalInt
, java.util.OptionalDouble
,
and java.util.OptionalLong
.
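The automatic conversion can be pictured as a simple unwrapping step. This is a sketch of the described behavior, not Doma's actual code:

```java
import java.util.Optional;

class OptionalSketch {
    // An Optional value is treated as its element type: a present value is
    // unwrapped, and an empty Optional behaves like null in the expression.
    static Object unwrap(Object value) {
        if (value instanceof Optional) {
            return ((Optional<?>) value).orElse(null);
        }
        return value;
    }

    public static void main(String[] args) {
        System.out.println(unwrap(Optional.of("smith")));
        System.out.println(unwrap(Optional.empty()));
    }
}
```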
Literals¶
You can use the following literals:
Literal | Type |
---|---|
null | void |
true | boolean |
false | boolean |
10 | int |
10L | long |
0.123F | float |
0.123D | double |
0.123B | java.math.BigDecimal |
'a' | char |
"a" | java.lang.String |
The numeric types are distinguished by suffix letters such as L or F at the end of the literal. The suffixes must be upper case letters.
select * from employee where
/*%if employeeName != null && employeeName.length() > 10 */
employee_name = /* employeeName */'smith'
/*%end*/
Comparison operators¶
You can use the following comparison operators:
Operator | Description |
---|---|
== | Equal to operator |
!= | Not equal to operator |
< | Less than operator |
<= | Less than or equal to operator |
> | Greater than operator |
>= | Greater than or equal to operator |
To use comparison operators, operands must implement java.lang.Comparable
.
The operands for <
, <=
, >
and >=
must not be null
.
select * from employee where
/*%if employeeName.indexOf("s") > -1 */
employee_name = /* employeeName */'smith'
/*%end*/
Logical operators¶
You can use the following logical operators:
Operator | Description |
---|---|
! | Logical complement operator |
&& | Conditional-AND operator |
|| | Conditional-OR operator |
With parentheses, you can override the precedence of operators.
select * from employee where
/*%if (departmentId == null || managerId == null) && employeeName != null */
employee_name = /* employeeName */'smith'
/*%end*/
Arithmetic operators¶
You can use the following arithmetic operators:
Operator | Description |
---|---|
+ | Additive operator |
- | Subtraction operator |
* | Multiplication operator |
/ | Division operator |
% | Remainder operator |
Operands must be of numeric types.
select * from employee where
salary = /* salary + 1000 */0
String concatenation operator¶
You can concatenate strings using the concatenation operator +. The operands must be one of the following types:
- java.lang.String
- java.lang.Character
- char
select * from employee where
employee_name like /* employeeName + "_" */'smith'
Calling instance methods¶
You can call instance methods using the method name after a dot (.).
The method visibility must be public.
select * from employee where
/*%if employeeName.startsWith("s") */
employee_name = /* employeeName */'smith'
/*%end*/
If the method has no argument, specify ()
after the method name.
select * from employee where
/*%if employeeName.length() > 10 */
employee_name = /* employeeName */'smith'
/*%end*/
Accessing instance fields¶
You can access instance fields using the field name after a dot (.).
Even if the visibility is private, you can access it.
select * from employee where
employee_name = /* employee.employeeName */'smith'
Calling static methods¶
You can call static methods by prefixing the method name with the fully qualified class name enclosed in @ characters.
The method visibility must be public.
select * from employee where
/*%if @java.util.regex.Pattern@matches("^[a-z]*$", employeeName) */
employee_name = /* employeeName */'smith'
/*%end*/
Accessing static fields¶
You can access static fields by prefixing the field name with the fully qualified class name enclosed in @ characters.
Even if the visibility is private, you can access it.
select * from employee where
/*%if employeeName.length() < @java.lang.Byte@MAX_VALUE */
employee_name = /* employeeName */'smith'
/*%end*/
Using built-in functions¶
Built-in functions are utilities mainly for changing values of binding variables before binding them to SQL.
For example, when you run a prefix search with a LIKE clause, you can write like this:
select * from employee where
employee_name like /* @prefix(employee.employeeName) */'smith' escape '$'
@prefix(employee.employeeName)
means that we pass employee.employeeName
to the @prefix
function.
The @prefix function converts the character sequence received as its parameter into a string for prefix search. It also escapes special characters.
For example, if the value of employee.employeeName
is ABC
, it’s converted to ABC%
.
If the value of employee.employeeName
contains %
such as AB%C
,
the %
is escaped with a default escape sequence $
,
therefore the value is converted to AB$%C%
.
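The escaping behavior described above can be sketched in plain Java. This is an illustration, not Doma's actual implementation; the treatment of `_` as an escapable wildcard is an assumption here:

```java
class PrefixSketch {
    // Escapes LIKE wildcards with escapeChar and appends '%' for a prefix search.
    static String prefix(CharSequence text, char escapeChar) {
        if (text == null) {
            return null;
        }
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < text.length(); i++) {
            char c = text.charAt(i);
            if (c == '%' || c == '_' || c == escapeChar) {
                sb.append(escapeChar); // escape special characters
            }
            sb.append(c);
        }
        return sb.append('%').toString();
    }

    public static void main(String[] args) {
        System.out.println(prefix("AB%C", '$'));
    }
}
```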
You can use the following function signatures:
- String @escape(CharSequence text, char escapeChar = '$')
- Escapes the character sequence for LIKE operations. The return value is a string resulting from escaping the character sequence. If escapeChar isn't specified, '$' is used as the default escape character. It returns null if you pass null as a parameter.
- String @prefix(CharSequence prefix, char escapeChar = '$')
- Converts the character sequence for prefix search. The return value is a string resulting from escaping the character sequence and adding a wildcard character at the end. If escapeChar isn't specified, '$' is used as the default escape character. It returns null if you pass null as a parameter.
- String @infix(CharSequence infix, char escapeChar = '$')
- Converts the character sequence for infix search. The return value is a string resulting from escaping the character sequence and adding wildcard characters at the beginning and the end. If escapeChar isn't specified, '$' is used as the default escape character. It returns null if you pass null as a parameter.
- String @suffix(CharSequence suffix, char escapeChar = '$')
- Converts the character sequence for suffix search. The return value is a string resulting from escaping the character sequence and adding a wildcard character at the beginning. If escapeChar isn't specified, '$' is used as the default escape character. It returns null if you pass null as a parameter.
- java.util.Date @roundDownTimePart(java.util.Date date)
- Rounds down the time part. The return value is a new Date with the time part rounded down. It returns null if you pass null as a parameter.
- java.sql.Date @roundDownTimePart(java.sql.Date date)
- Rounds down the time part. The return value is a new Date with the time part rounded down. It returns null if you pass null as a parameter.
- java.sql.Timestamp @roundDownTimePart(java.sql.Timestamp timestamp)
- Rounds down the time part. The return value is a new Timestamp with the time part rounded down. It returns null if you pass null as a parameter.
- java.util.Date @roundUpTimePart(java.util.Date date)
- Rounds up the time part. The return value is a new Date with the time part rounded up. It returns null if you pass null as a parameter.
- java.sql.Date @roundUpTimePart(java.sql.Date date)
- Rounds up the time part. The return value is a new Date with the time part rounded up. It returns null if you pass null as a parameter.
- java.sql.Timestamp @roundUpTimePart(java.sql.Timestamp timestamp)
- Rounds up the time part. The return value is a new Timestamp with the time part rounded up. It returns null if you pass null as a parameter.
- boolean @isEmpty(CharSequence charSequence)
- Returns true if the character sequence is null or its length is 0.
- boolean @isNotEmpty(CharSequence charSequence)
- Returns true if the character sequence is neither null nor of length 0.
- boolean @isBlank(CharSequence charSequence)
- Returns true if the character sequence is null, its length is 0, or it consists only of whitespace.
- boolean @isNotBlank(CharSequence charSequence)
- Returns true if the character sequence is not null, its length is not 0, and it does not consist only of whitespace.
These functions correspond to the methods of org.seasar.doma.expr.ExpressionFunctions.
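As a quick illustration of the @isEmpty and @isBlank contracts listed above (a sketch of the documented behavior, not Doma's code):

```java
class BlankSketch {
    // @isEmpty contract: null or zero length.
    static boolean isEmpty(CharSequence cs) {
        return cs == null || cs.length() == 0;
    }

    // @isBlank contract: null, zero length, or whitespace only.
    static boolean isBlank(CharSequence cs) {
        if (isEmpty(cs)) {
            return true;
        }
        for (int i = 0; i < cs.length(); i++) {
            if (!Character.isWhitespace(cs.charAt(i))) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isBlank("   "));
    }
}
```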
Using custom functions¶
You can define and use your own functions.
To use custom functions that you define yourself, the following settings are required:
- The function is defined as a method of a class which implements
org.seasar.doma.expr.ExpressionFunctions
. - The method is a public instance method.
- The class is registered as an option in Annotation processing.
The key of the option is
doma.expr.functions
. - The instance of the class you create is used in an RDBMS dialect in your configuration class
(The implementations of RDBMS dialect provided by Doma can receive
ExpressionFunctions
in the constructor).
To call a custom function, add @
at the beginning of the function name like built-in functions.
For example, you can call myfunc
function like this:
select * from employee where
employee_name = /* @myfunc(employee.employeeName) */'smith'
Transaction¶
Doma supports local transactions. This document explains how to configure and use them. If you want to use global transactions, use frameworks or application servers that support JTA (Java Transaction API).
Configuration¶
To use local transactions, the following conditions are required:
- Return
LocalTransactionDataSource
fromgetDataSource
inConfig
- Generate
LocalTransactionManager
using theLocalTransactionDataSource
above in the constructor - Use the
LocalTransactionManager
above to control database access
There are several ways to create and obtain the LocalTransactionManager, but the simplest is to create it in the constructor of your Config implementation class and make that class a singleton.
Here is an example:
@SingletonConfig
public class AppConfig implements Config {
private static final AppConfig CONFIG = new AppConfig();
private final Dialect dialect;
private final LocalTransactionDataSource dataSource;
private final TransactionManager transactionManager;
private AppConfig() {
dialect = new H2Dialect();
dataSource = new LocalTransactionDataSource(
"jdbc:h2:mem:tutorial;DB_CLOSE_DELAY=-1", "sa", null);
transactionManager = new LocalTransactionManager(
dataSource.getLocalTransaction(getJdbcLogger()));
}
@Override
public Dialect getDialect() {
return dialect;
}
@Override
public DataSource getDataSource() {
return dataSource;
}
@Override
public TransactionManager getTransactionManager() {
return transactionManager;
}
public static AppConfig singleton() {
return CONFIG;
}
}
Note
The @SingletonConfig annotation indicates that this class is a singleton.
Usage¶
The following examples use this Dao interface, which is annotated with the AppConfig class shown in the Configuration section:
@Dao(config = AppConfig.class)
public interface EmployeeDao {
...
}
The dao used in the code examples below is an instance of this interface.
Start and finish transactions¶
You can start a transaction with one of the following methods of TransactionManager:
- required
- requiresNew
- notSupported
Use a lambda expression to write a process which you want to run in a transaction.
TransactionManager tm = AppConfig.singleton().getTransactionManager();
tm.required(() -> {
Employee employee = dao.selectById(1);
employee.setName("hoge");
employee.setJobType(JobType.PRESIDENT);
dao.update(employee);
});
The transaction is committed if the lambda expression finishes successfully. The transaction is rolled back if the lambda expression throws an exception.
Explicit rollback¶
Besides throwing an exception, you can use the setRollbackOnly method to roll back a transaction.
TransactionManager tm = AppConfig.singleton().getTransactionManager();
tm.required(() -> {
Employee employee = dao.selectById(1);
employee.setName("hoge");
employee.setJobType(JobType.PRESIDENT);
dao.update(employee);
// Mark as rollback
tm.setRollbackOnly();
});
Savepoint¶
With a savepoint, you can cancel specific changes in a transaction.
TransactionManager tm = AppConfig.singleton().getTransactionManager();
tm.required(() -> {
// Search and update
Employee employee = dao.selectById(1);
employee.setName("hoge");
dao.update(employee);
// Create a savepoint
tm.setSavepoint("beforeDelete");
// Delete
dao.delete(employee);
// Rollback to the savepoint (cancel the deletion above)
tm.rollback("beforeDelete");
});
Building an application¶
Contents
Before You Start¶
Maven Central Repository¶
You can pull the artifacts of the Doma framework from the Maven central repository.
We provide two artifacts, doma-core
and doma-processor
.
The doma-core
artifact is required at runtime and compile-time.
The doma-processor
artifact provides annotation processors and is required at compile-time only.
The group id and artifact id of those artifacts are as follows:
GroupId: | org.seasar.doma |
---|---|
ArtifactId: | doma-core |
GroupId: | org.seasar.doma |
ArtifactId: | doma-processor |
Build with Gradle¶
Write your build.gradle as follows:
dependencies {
implementation "org.seasar.doma:doma-core:2.35.0"
annotationProcessor "org.seasar.doma:doma-processor:2.35.0"
}
To simplify your build script, we recommend that you use the Doma Compile Plugin.
See build.gradle in the simple-boilerplate repository as an example.
Build with IntelliJ IDEA¶
Use a newer version of IntelliJ IDEA, and then import your project as a Gradle project. To learn how to import a project, see Import template project.
Build with Eclipse¶
Generate eclipse setting files with Gradle, and then import your project into Eclipse.
To generate the setting files, run gradle eclipse
.
To simplify your build script, we recommend that you use the Doma Compile Plugin and the AptEclipsePlugin.
See build.gradle in the simple-boilerplate repository as an example.
Annotation processing¶
Contents
Doma uses Pluggable Annotation Processing API at compile time.
In this document, we describe the options for the annotation processors in Doma and show you how to pass them to build tools.
Options¶
- doma.dao.package
- The package that the generated implementation classes of interfaces annotated with
@Dao
belong to. The specified value overrides the value of doma.dao.subpackage. The default value is the same package as the one the interfaces annotated with@Dao
belong to. - doma.dao.subpackage
- The subpackage that the generated implementation classes of interfaces annotated with
@Dao
belong to. The specified value is overridden by the value of doma.dao.package. If this value isimpl
and the package of interfaces annotated with@Dao
isexample.dao
, the generated implementation classes belong to the packageexample.dao.impl
. - doma.dao.suffix
- The name suffix that the generated implementation classes of interfaces annotated with
@Dao
have. If this value isBean
and the simple name of the interface annotated with@Dao
isEmployeeDao
, the simple name of the generated implementation class isEmployeeDaoBean
. The default value isImpl
. - doma.debug
- Whether to output the debug log in annotation processing.
If the value is
true
, the annotation processors output the debug log. The default value isfalse
. - doma.domain.converters
- The fully qualified names of the classes annotated with @DomainConverters, given as a comma-separated list. This value is used to find external domain classes. - doma.entity.field.prefix
- The name prefix that the fields of the generated entity meta classes have.
The value
none
means the prefix is not used. The default value is$
. - doma.expr.functions
- The fully qualified name of the class that implements org.seasar.doma.expr.ExpressionFunctions. The default value is org.seasar.doma.expr.ExpressionFunctions. This value is used to determine which functions are available in expression comments. - doma.metamodel.enabled
- Whether to generate meta classes for the Criteria API.
When the value is
true
, metamodels are generated for all entity classes even if they are not specified withmetamodel = @Metamodel
. The default value isfalse
. - doma.metamodel.prefix
- The name prefix of the metamodel classes for the Criteria API. The default value is an empty string.
- doma.metamodel.suffix
- The name suffix of the metamodel classes for the Criteria API.
The default value is
_
. - doma.resources.dir
- The resource directory that contains resource files such as the doma.compile.config file and SQL files. Specify the value as an absolute path. If the value is not specified, the resource directory is the same as the directory where the classes are generated.
- doma.sql.validation
- Whether to validate the existence of sql files and the grammar of sql comments.
If the value is
true
, the validations run. To disable the validations, setfalse
. The default value istrue
. - doma.version.validation
- Whether to validate the versions of doma.jar between runtime and compile-time.
If the value is
true
, the validation runs. To disable the validation, setfalse
. The default value istrue
. - doma.config.path
- The file path of the configuration file for Doma.
The default value is
doma.compile.config
.
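How doma.dao.subpackage and doma.dao.suffix combine can be sketched like this. It is an illustration of the naming rules above, not Doma's code generator; doma.dao.package, when set, would override the subpackage:

```java
class DaoNameSketch {
    // Derives the generated implementation class name from a Dao interface name,
    // a subpackage (doma.dao.subpackage) and a suffix (doma.dao.suffix).
    static String implName(String daoInterface, String subpackage, String suffix) {
        int dot = daoInterface.lastIndexOf('.');
        String pkg = daoInterface.substring(0, dot);
        String simpleName = daoInterface.substring(dot + 1);
        if (!subpackage.isEmpty()) {
            pkg = pkg + "." + subpackage;
        }
        return pkg + "." + simpleName + suffix;
    }

    public static void main(String[] args) {
        System.out.println(implName("example.dao.EmployeeDao", "impl", "Impl"));
    }
}
```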
Setting options in Gradle¶
Use the compilerArgs property:
compileJava {
options {
compilerArgs = ['-Adoma.dao.subpackage=impl', '-Adoma.dao.suffix=Impl']
}
}
Setting options in IntelliJ IDEA¶
Import your project as a Gradle project. In that case, the options written in build.gradle are used.
Setting options in Eclipse¶
Use the Gradle plugin com.diffplug.eclipse.apt and the processorArgs property:
plugins {
id 'com.diffplug.eclipse.apt' version '3.22.0'
}
compileJava {
aptOptions {
processorArgs = [
'doma.dao.subpackage' : 'impl', 'doma.dao.suffix' : 'Impl'
]
}
}
When you run gradle eclipse
, eclipse setting files are generated.
Setting options with configuration file¶
The options specified in the doma.compile.config
file are available in all build tools
such as Eclipse, IDEA, Gradle and so on.
The doma.compile.config
file must follow the properties file format
and be placed in the root directory such as src/main/resources
.
Note
The options specified in the doma.compile.config
file are overridden by
the ones specific to the build tools.
Lombok support¶
Contents
Doma supports Lombok 1.16.12 or above.
Note
If you intend to use Eclipse, use version 4.5 or above.
Overview¶
Both Lombok and Doma provide annotation processors. Because the execution order of annotation processors is not determined in Java, Doma’s annotation processors are not always aware of the class modifications made by Lombok annotation processors.
To resolve the issue, Doma’s annotation processors recognize several of Lombok’s annotations
and change their behavior.
For example, if Doma’s annotation processors find a class annotated with @lombok.Value
,
they suppose that the class has a constructor whose arguments cover all instance fields.
Best practices¶
We show you recommended ways to define classes with Lombok annotations.
Entity class definition¶
immutable entity classes¶
- Specify
true
to theimmutable
element of@Entity
- Specify either
@lombok.Value
or@lombok.AllArgsConstructor
- Specify
@lombok.Getter
to generate getters, in case of@lombok.AllArgsConstructor
@Entity(immutable = true)
@Value
public class Employee {
@Id
Integer id;
String name;
Age age;
}
@Entity(immutable = true)
@AllArgsConstructor
@Getter
public class Worker {
@Id
private final Integer id;
private final String name;
private final Age age;
}
mutable entity classes¶
- Define a default constructor
- Specify
@lombok.Data
or@lombok.Getter
/@lombok.Setter
to generate getters/setters
@Entity
@Data
public class Person {
@Id
private Integer id;
private String name;
private Age age;
}
@Entity
@Getter
@Setter
public class Businessman {
@Id
private Integer id;
private String name;
private Age age;
}
Domain class definition¶
- Specify
@lombok.Value
- Define only one instance field whose name is
value
@Domain(valueType = Integer.class)
@Value
public class Age {
Integer value;
}
Embeddable class definition¶
- Specify either
@lombok.Value
or@lombok.AllArgsConstructor
- Specify
@lombok.Getter
to generate getters, in case of@lombok.AllArgsConstructor
@Embeddable
@Value
public class Address {
String street;
String city;
}
@Embeddable
@AllArgsConstructor
@Getter
public class Location {
private final String street;
private final String city;
}
Kotlin support¶
Contents
Doma supports Kotlin 1.3.11 or above experimentally.
Best practices¶
We show you recommended ways to define classes and build them with Kotlin.
Entity classes¶
- Define as a data class
- Specify
true
to theimmutable
element of@Entity
- Define only one constructor
- Define properties only in the constructor
- Use val for the property definitions
@Entity(immutable = true)
data class Person(
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
val id: Int? = null,
val name: Name,
val address: Address)
Domain classes¶
- Define as a data class
- Define only one constructor
- Define only one property whose name is
value
in the constructor - Use val for the property definition
@Domain(valueType = String::class)
data class Name(val value: String)
Embeddable classes¶
- Define as a data class
- Define only one constructor
- Define properties only in the constructor
- Use val for the property definitions
@Embeddable
data class Address(val city: String, val street: String)
Dao interfaces¶
- Specify a SQL template to
@org.seasar.doma.Sql
- Use
org.seasar.doma.jdbc.Result
as the return type of@Delete
,@Insert
and@Update
- Use
org.seasar.doma.jdbc.BatchResult
as the return type of@BatchDelete
,@BatchInsert
and@BatchUpdate
@Dao(config = AppConfig::class)
interface PersonDao {
@Sql("""
select * from person where id = /*id*/0
""")
@Select
fun selectById(id: Int): Person
@Insert
fun insert(person: Person): Result<Person>
}
- Use Destructuring Declarations
for
org.seasar.doma.jdbc.Result
andorg.seasar.doma.jdbc.BatchResult
val dao: PersonDao = ...
val person = Person(name = Name("John"), address = Address(city = "Tokyo", street = "Yaesu"))
val (newPerson, count) = dao.insert(person)
Using kapt in Gradle¶
Annotation processors are supported in Kotlin with the kapt compiler plugin.
Add the dependencies using the kapt and implementation configuration in your dependencies block:
dependencies {
implementation "org.seasar.doma:doma-core:2.35.0"
kapt "org.seasar.doma:doma-processor:2.35.0"
}
To simplify your build script, we recommend you use the Doma Compile Plugin:
plugins {
id 'org.seasar.doma.compile' version '1.0.0'
}
For more details, see this build.gradle.
Note
Remember that you always have options as follows:
- Write all code in Kotlin
- Write all code in Java
- Write code annotated with Doma’s annotations in Java and others in Kotlin
The third option is worth considering because it avoids some problems with kapt.
Developer Documentation¶
Development of Doma¶
Contents
Note
This document is written for the developers of Doma.
Before You Start¶
To build you will need Git and JDK 8.
Be sure that your JAVA_HOME
environment variable points to the jdk1.8.0
folder extracted from the JDK download.
Get the Source Code¶
$ git clone https://github.com/domaframework/doma.git
$ cd doma
Build from the Command Line¶
$ ./gradlew build
Format the Source Code¶
We use google-java-format 1.7 for code formatting.
Continuous Integration¶
We use Travis CI for continuous integration. All pull requests to the master branch are tested on Travis CI.
Documents¶
We use Sphinx to generate documents. To use Sphinx you will need Python.
Set up an environment¶
$ cd docs
$ pip install -r requirements.txt
Generate HTML files¶
Execute the sphinx-autobuild command in the docs
directory:
$ sphinx-autobuild . _build/html
Visit the webpage served at http://127.0.0.1:8000.
About Doma¶
Release notes¶
v2.35.0: 2020-05-17¶
v2.34.0: 2020-05-10¶
See also Criteria API and simple-examples.
v2.33.0: 2020-05-06¶
See also Criteria API and simple-examples.
v2.31.0: 2020-04-29¶
- GH388 Enhance the CommentContext class to accept a message
- GH387 Support the Sql annotation officially
- GH386 Add the “getQuery” method to the “Command” interface
- GH384 Resolve type parameters with actual type arguments for generic types
- GH382 Add the Criteria API
- GH381 Replace version before build
- GH380 Allow uncommitted files before release
- GH379 Don’t replace the version with the snapshot version in documents
- GH378 Allow no-args default method in Kotlin
v2.30.0: 2020-04-18¶
See also Migration Guide.
v2.29.0: 2020-04-05¶
v2.28.0: 2020-03-18¶
v2.27.0: 2020-01-25¶
v2.26.0: 2019-12-29¶
v2.25.0: 2019-08-25¶
- GH322 Describe settings for IntelliJ IDEA Community Edition 2019.2.1
- GH321 Support domain mapping using ExternalDomain of Java Array
- GH314 Upgrade google-java-format to 1.7
- GH313 Upgrade spotless-plugin-gradle to 3.20.0
- GH312 Clarify support for Java 12
- GH311 Introduce wrapper classes dedicated to primitive types
- GH310 Upgrade JUnit to 5.4.0
v2.24.0: 2019-02-23¶
- GH308 Implement toString methods to make debug easy
- GH307 Polish the org.seasar.doma.internal package
- GH306 Remove Japanese comments
- GH305 Register the option “doma.config.path” to suppress a warning message
- GH304 Simplify the DefaultPropertyType class
- GH303 Simplify generators
- GH301 Fix compile error in procedure and function methods
- GH300 Support bound type parameters in domain classes
- GH299 Migrate from JUnit 3 to JUnit 5
- GH298 Polish build script
v2.23.0: 2019-02-03¶
- GH294 Support array types for bind and literal variable directives
- GH276 Support array types for loop directives
- GH286 Support automatic removal of unnecessary ORDER BY and GROUP BY clauses
- GH295 Polish messages
- GH284 Preserve the auto-commit state of a JDBC connection
- GH283 Run tests in each annotation processor instance
- GH296 GH293 GH292 GH291 GH290 GH289 GH288 GH287 GH285 GH282 Polish documents
v2.22.0: 2019-01-20¶
v2.21.0: 2019-01-06¶
- GH267 Change the description of Doma
- GH266 Remove descriptions of Doma 3
- GH265 Polish annotation processing
- GH264 Improve build script example
- GH263 Translate resource bundle messages from Japanese into English
- GH262 Translate javadoc comments from Japanese into English
- GH261 Change the sphinx language option
- GH260 Translate development.rst
- GH259 Format with google-java-format 1.6
- GH258 Translate docs from Japanese into English
- GH257 Use Sphinx 1.8.2
v2.20.0: 2018-12-23¶
- GH255 Support Gradle incremental annotation processing
- GH254 Specify supported Java versions
- GH253 Explain how to write gradle build script in Gradle 5.0
- GH252 Polish Gradle configuration
- GH251 Use Gradle 5.0
- GH248 Fix mistake in abbreviation for JRE
- GH247 Load a class with Class.forName when the context class loader fails to load the class
- GH245 Revise wrong descriptions about @Update.excludeNull, @BatchUpdate.include and @BatchUpdate.exclude
v2.19.3: 2018-09-02¶
v2.19.1: 2018-01-08¶
v2.19.0: 2017-11-19¶
v2.18.0: 2017-10-28¶
v2.17.0: 2017-09-09¶
v2.16.1: 2017-05-14¶
v2.15.0: 2017-02-05¶
v2.14.0: 2017-01-14¶
- GH183 Add BatchUpdateExecutor, BatchDeleteExecutor, and MapBatchInsertBuilder
- GH182 Fix a compile error in generated code when an embeddable class defines no properties
- GH181 Add @SqlProcessor for handling SQL built from SQL templates
- GH180 Support Lombok
- GH179 Fix StandardExpressionFunctions#escape not using escapeChar
- GH177 Support Kotlin 1.0.6
- GH176 Add BatchInsertExecutor
- GH175 Support LocalDate and LocalDateTime in built-in functions
- GH174 Add MapInsertBuilder, which builds an INSERT statement automatically from a Map parameter
v2.12.0: 2016-07-14¶
v2.11.0: 2016-06-18¶
v2.10.0: 2016-05-28¶
- GH146 Fix a bug where update operations failed when an embeddable object was null
- GH145 Add documentation about Kotlin support
- GH142 Add documentation for embeddable classes
- GH141 Simplify access to parent properties when an entity class uses inheritance
- GH140 Fix an exception thrown when null was assigned to a primitive-type property so that the default value is set instead
- GH139 Support @Embeddable
- GH138 Remove the need to use @ParameterName when defining immutable entities in Kotlin
v2.9.0: 2016-05-16¶
v2.7.0: 2016-02-27¶
v2.6.2: 2016-02-11¶
v2.6.0: 2015-11-21¶
v2.4.0: 2015-08-14¶
- GH93 Fix an issue where JdbcMappingHint#getDomainClass() did not return the domain class
- GH89 Make PortableObjectType generic so that String and other types can be specified as valueType
- GH88 Fix a typo in a JdbcLogger method name: Failuer -> Failure
- GH87 Add public constructors to the subclasses of StandardExpressionFunctions
- GH86 Version number spec is different from the document
- GH84 Fix an issue where DOMA4122 was reported for methods using populate
- GH81 Fall back to the default resource bundle when a resource bundle cannot be obtained
v2.2.0: 2015-03-28¶
- GH71 Allow interfaces to be annotated with @Domain
- GH70 Obtain EntityListener instances through EntityListenerProvider
- GH67 Fix an issue where paging was not executed correctly when a SQL Server OPTION clause was present
- GH66 Apply naming conventions at runtime instead of compile time
- GH64 Fix a bug where a NullPointerException occurred when fetching immutable entities
- GH61 Use OFFSET-FETCH, added in SQL Server 2012, for paging
- GH60 Change the value returned by getName() of Mssql2008Dialect
- GH59 Fix test failures on Windows
- GH58 Fix StringUtil's fromCamelCaseToSnakeCase returning unintended results when a column name contains digits
v2.1.0: 2014-12-30¶
v2.0.1: 2014-08-06¶
Fixed an issue where specifying byte[] as the second type argument of DomainConverter caused a compile error in annotation processing.
v2.0.0: 2014-07-02¶
Removed UnitOfWork.
v2.0-beta-5: 2014-06-07¶
- Dao methods returning List<Optional<Emp>> or List<Optional<Map<String, Object>>> now cause a compile error in annotation processing
- Fixed an issue where changes were not reflected in OriginalStates after an entity update
- Stopped running automatic generation when the entity identifier already has a value
- Fixed a DOMA4257 error in the column list expansion comment
- Made SQL log output controllable with annotations
- Made log messages output from Daos more detailed
- Changed the logger name of UtilLoggingJdbcLogger to the fully qualified class name
- Fixed an issue where the SQL file path was not logged during SQL execution
v2.0-beta-4: 2014-05-04¶
- Upgraded the Pluggable Annotation Processing API Visitor to the Java 8 version
- Treat an empty java.util.Iterable bound to an IN clause as SQL null
- Supported java.sql.SQLXML
- Fixed a bug in LocalTransaction where savepoints before the specified one were released where those after it should have been
- Fixed a bug where LocalTransaction output an incorrect log message when releasing a savepoint
- Fixed a bug where annotation processing failed for entity properties of byte array type
v2.0-beta-3: 2014-04-03¶
- Enabled processing of search results with java.util.stream.Collector
- Extracted the TransactionManager interface from LocalTransactionManager
- Fixed a bug where some settings specified in Config were ignored
- Added the MapKeyNaming interface to control map key naming conventions uniformly
- Enabled java.time.LocalDate, java.time.LocalTime, and java.time.LocalDateTime as basic types
- Added AbstractJdbcLogger to make it easier to replace the JdbcLogger implementation
- Renamed SelectStrategyType to SelectType
Frequently Asked Questions¶
Contents
- Frequently Asked Questions
- General
- Runtime environment
- Development environment
- Which version of JDK does Doma support?
- Which IDE do you recommend?
- In Eclipse, the jar file of Doma is added to the Java Build Path but annotation processing doesn’t run.
- Where are generated source files in annotation processing?
- In Eclipse, where is the .apt_generated directory?
- I get the message that the sql file is not found, but it exists.
- Do you provide any code generation tools?
- Features as a database access library
General¶
What does “Doma” mean?¶
The name “Doma” comes from the “Dao Oriented database MApping framework”.
What is annotation processing?¶
Annotation processing, which was introduced in Java 6, allows us to validate and generate source code at compile time.
We use annotation processing for the following purposes:
- Generating meta classes from the classes annotated with @Entity and @Domain.
- Generating implementation classes of the interfaces annotated with @Dao.
- Validating SQL templates.
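As an illustrative sketch (it requires the Doma jar on the annotation processor path; the class, field, and method names here are made up for this example), the annotation processor consumes classes like these:

```java
import org.seasar.doma.Dao;
import org.seasar.doma.Entity;
import org.seasar.doma.Id;
import org.seasar.doma.Select;

// A meta class (_Employee) is generated from this entity at compile time.
@Entity
public class Employee {
    @Id
    Integer id;
    String name;
}

// An implementation class (EmployeeDaoImpl) is generated for this interface,
// and the SQL template backing selectById is validated at compile time.
@Dao
public interface EmployeeDao {
    @Select
    Employee selectById(Integer id);
}
```

Note that, depending on the Doma version, @Dao may require a config element such as @Dao(config = AppConfig.class).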
Runtime environment¶
Which version of JRE does Doma support?¶
JRE 8 and above.
Development environment¶
Which version of JDK does Doma support?¶
JDK 8 and above.
Which IDE do you recommend?¶
We recommend Eclipse and IntelliJ IDEA.
In Eclipse, the jar file of Doma is added to the Java Build Path but annotation processing doesn’t run.¶
Enable annotation processing and add the jar file to the Factory Path too. See also Build with Eclipse.
Where are generated source files in annotation processing?¶
In Eclipse, they are found in the .apt_generated directory.
In Eclipse, where is the .apt_generated directory?¶
You can find it in the Navigator view.
I get the message that the sql file is not found, but it exists.¶
You may get the following message, though the file exists:
[DOMA4019] The file[META-INF/../select.sql] is not found from the classpath
When you use Eclipse, check that the output folder for resources is the same as the one for class files in the Java Build Path dialog. You can avoid this trouble by generating the Eclipse setting files automatically. See also Build with Eclipse.
When you use Gradle, check that the resource files are copied to compileJava.destinationDir before the compileJava task runs.
You can avoid this trouble by using the Doma Compile Plugin.
See also Build with Gradle.
Do you provide any code generation tools?¶
Yes. We have the Doma CodeGen plugin, which generates Java and SQL files from a database.
Features as a database access library¶
Does Doma generate SQL statements?¶
Yes, Doma generates the following statements:
- SELECT
- INSERT
- DELETE
- UPDATE
- Stored procedure call
- Stored function call
How are dynamic SQL statements executed?¶
There are two ways:
- The SQL Templates.
- The Criteria API.
See SQL templates and Criteria API for more information.
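As an example of the first approach, a two-way SQL template embeds conditions in SQL comments, so the file remains valid SQL on its own (the table and parameter names here are illustrative):

```sql
SELECT id, name
FROM employee
WHERE
/*%if name != null */
  name = /* name */'sample'
/*%end*/
```

When the name parameter is null, Doma removes the condition, and the now-empty WHERE keyword along with it, at runtime; when the file is run as plain SQL in a query tool, the test literal 'sample' is used instead.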
Does Doma map database relationships such as one-to-one and one-to-many to Java objects?¶
Yes.
Doma provides the Criteria API to map database relationships to Java entities.
See associate (Entityql) for more information.
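A minimal sketch of this style, assuming Employee and Department entities whose generated metamodel classes are Employee_ and Department_ (the exact names and available methods depend on your entity definitions and Doma version):

```java
// Entityql and the metamodel classes come from the Doma Criteria API;
// config is your org.seasar.doma.jdbc.Config implementation.
Entityql entityql = new Entityql(config);
Employee_ e = new Employee_();
Department_ d = new Department_();

List<Employee> list = entityql
    .from(e)
    .innerJoin(d, on -> on.eq(e.departmentId, d.departmentId))
    .associate(e, d, (employee, department) -> {
        // wire up both sides of the one-to-many relationship
        employee.setDepartment(department);
        department.getEmployeeList().add(employee);
    })
    .fetch();
```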
Does Doma provide a JDBC connection pooling feature?¶
No.
Use Doma together with a JDBC connection pool library such as HikariCP.
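As a sketch of that combination (assuming the HikariCP and Doma jars are on the classpath; the JDBC URL, dialect, and pool size are illustrative), a pooled DataSource can be returned from a Doma Config implementation:

```java
import javax.sql.DataSource;

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

import org.seasar.doma.jdbc.Config;
import org.seasar.doma.jdbc.dialect.Dialect;
import org.seasar.doma.jdbc.dialect.H2Dialect;

public class AppConfig implements Config {

    // HikariCP manages the underlying JDBC connections.
    private static final HikariDataSource dataSource;

    static {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:h2:mem:example"); // illustrative URL
        config.setMaximumPoolSize(10);
        dataSource = new HikariDataSource(config);
    }

    @Override
    public DataSource getDataSource() {
        return dataSource; // Doma borrows pooled connections from here
    }

    @Override
    public Dialect getDialect() {
        return new H2Dialect();
    }
}
```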