_images/doma.png

Welcome to Doma 2

Doma 2 is a database access framework for Java 8+. Doma has various strengths:

  • Verifies and generates source code at compile time using annotation processing.
  • Maps database columns to user-defined Java objects.
  • Uses SQL templates, called “two-way SQL”.
  • Supports classes introduced in Java 8, such as java.time.LocalDate, java.util.Optional, and java.util.stream.Stream.
  • Has no dependencies on other libraries.

This document consists of the following sections:

Help improve our documentation

Have feedback or a question? Found a typo?

Please open an issue or submit a pull request at https://github.com/domaframework/doma and we’ll get back to you.

User Documentation

Get started! (Eclipse)

Summary

This section describes how to set up a development environment and how to execute basic database access.

Install JDK

Install JDK 8.

Note

Doma supports JDK 8 and later. See also Which version of JDK does Doma support?.

Install Eclipse

Install Eclipse.

Note

Other distributions such as Eclipse IDE for Java EE Developers should also work, but this document has been verified with Eclipse Standard 4.4. Later versions are expected to work as well.

Install the Doma Tools Eclipse plugin

Doma Tools is an Eclipse plugin that enables navigation between Java files and SQL files. The plugin is not required to use Doma, but using it improves productivity.

Select Help > Install New Software… from the menu bar and enter the following URL in the ‘Work with’ text box.

http://dl.bintray.com/domaframework/eclipse/

The available plugins are shown as below. Check the latest version of Doma Tools and proceed through the dialog to finish the installation.

_images/install-doma-tools.png

Associate file extensions

Doma Tools triggers annotation processing by hooking file updates. To make this work, you need to open SQL files in Eclipse.

Select Eclipse > Preferences… or Window > Preferences from the menu bar to open the preferences dialog.

Associate files with the .sql extension with the Text Editor as shown in the figure below.

_images/sql-file-association.png

Similarly, associate files with the .script extension with the Text Editor.

_images/script-file-association.png

Note

You can skip this setting if you use Eclipse IDE for Java EE Developers, because SQL files are associated with a specialized editor by default.

Note

We recommend a development style in which you write SQL with RDBMS-specific tools (such as Oracle SQL Developer or pgAdmin) and copy the finished SQL into the Eclipse editor.

Import template project

Clone simple-boilerplate from GitHub:

$ git clone https://github.com/domaframework/simple-boilerplate.git

Move to the cloned directory.

$ cd simple-boilerplate

Create the configuration files for Eclipse with the following command:

$ ./gradlew eclipse

Note

On Windows, run gradlew eclipse instead of ./gradlew eclipse.

Note

Set the installation directory of JDK 8 (or JDK 9 or 10) in the JAVA_HOME environment variable. It is required to run gradlew.

Note

The configuration for annotation processing is included in the generated Eclipse config files. See Build with Eclipse if you want to configure it manually.

Select File > Import… from the Eclipse menu bar, choose ‘Existing Projects into Workspace’, and import simple-boilerplate.

_images/import.png

Select the project and run JUnit to confirm that the import succeeded. If one test case passes, the import finished normally.

Structure of template project

The project’s source code structure is as follows:

─ src
  ├── main
  │   ├── java
  │   │   └── boilerplate
  │   │       ├── AppConfig.java
  │   │       ├── dao
  │   │       │   ├── AppDao.java
  │   │       │   └── EmployeeDao.java
  │   │       └── entity
  │   │           └── Employee.java
  │   └── resources
  │       └── META-INF
  │           └── boilerplate
  │               └── dao
  │                   ├── AppDao
  │                   │   ├── create.script
  │                   │   └── drop.script
  │                   └── EmployeeDao
  │                       ├── selectAll.sql
  │                       └── selectById.sql
  └── test
      ├── java
      │   └── boilerplate
      │       ├── DbResource.java
      │       └── dao
      │           └── EmployeeDaoTest.java
      └── resources

The important files are explained below.

AppConfig.java
The configuration class required to run Doma.
AppDao.java
A utility that creates and drops the database schema used in this application. It is not needed in a production environment. The script files under META-INF/boilerplate/dao/AppDao/ are used to create and drop the schema.
Employee.java
The entity class that corresponds to the EMPLOYEE table in the database.
EmployeeDao.java
The DAO interface that retrieves and updates the Employee class. The SQL files under META-INF/boilerplate/dao/EmployeeDao/ are used.
EmployeeDaoTest.java
A test that uses EmployeeDao. You can learn about Doma by adding test cases to this file. Because the database schema is created and dropped for each test method, data changes do not affect other tests.

Mutual transition between Java file and SQL file

EmployeeDao.java is defined as follows:

@Dao(config = AppConfig.class)
public interface EmployeeDao {

    @Select
    List<Employee> selectAll();

    @Select
    Employee selectById(Integer id);

    @Insert
    int insert(Employee employee);

    @Update
    int update(Employee employee);

    @Delete
    int delete(Employee employee);

}

Place the cursor on the selectById method in the Eclipse editor, right-click, and show the context menu. Select Doma > Jump to SQL in the menu to jump to the META-INF/boilerplate/dao/EmployeeDao/selectById.sql file.

Next, place the cursor anywhere in the META-INF/boilerplate/dao/EmployeeDao/selectById.sql file and show the context menu. Select Doma > Jump to Java in the menu to go back to the EmployeeDao.java file.

SQL File

Open the META-INF/boilerplate/dao/EmployeeDao/selectById.sql file. Its content is as follows:

select
    /*%expand*/*
from
    employee
where
    id = /* id */0

The /*%expand*/ directive expands the column list by referencing the entity class mapped to the Java method.

The /* id */ comment indicates that the value of the Java method parameter id is bound to the SQL at this position.

The 0 that follows is test data. By including test data, you can easily verify that the SQL has no mistakes when you execute it with a tool. The test data is not used when the SQL is executed from a Java program.

For details, see SQL templates.

Insert

To execute an insert, call a DAO method annotated with the @Insert annotation.

Execute insert process

Confirm that the following code exists in EmployeeDao:

@Insert
int insert(Employee employee);

Execute the insert using this method.

Add the following code to EmployeeDaoTest:

@Test
public void testInsert() {
    TransactionManager tm = AppConfig.singleton().getTransactionManager();

    Employee employee = new Employee();

    // First transaction
    // Execute inserting
    tm.required(() -> {
        employee.name = "HOGE";
        employee.age = 20;
        dao.insert(employee);
        assertNotNull(employee.id);
    });

    // Second transaction
    // Confirm that inserting is success
    tm.required(() -> {
        Employee employee2 = dao.selectById(employee.id);
        assertEquals("HOGE", employee2.name);
        assertEquals(Integer.valueOf(20), employee2.age);
        assertEquals(Integer.valueOf(1), employee2.version);
    });
}

Run JUnit and confirm that this code works.

The SQL generated for the insert is as follows:

insert into Employee (age, id, name, version) values (20, 100, 'HOGE', 1)

The identifier and the version number are set automatically.
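
This works because the Employee entity in the template project marks its identifier and version fields with dedicated annotations. A minimal sketch (the field set follows the test code above; the exact definition in the project may differ slightly):

@Entity
public class Employee {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY) // identifier generated by the database
    Integer id;

    String name;

    Integer age;

    @Version // used for optimistic concurrency control
    Integer version;
}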

Update

To execute an update, call a DAO method annotated with the @Update annotation.

Execute update process

Confirm that the following code exists in EmployeeDao:

@Update
int update(Employee employee);

Execute the update using this method.

Add the following code to EmployeeDaoTest:

@Test
public void testUpdate() {
    TransactionManager tm = AppConfig.singleton().getTransactionManager();

    // First transaction
    // Search and update age field
    tm.required(() -> {
        Employee employee = dao.selectById(1);
        assertEquals("ALLEN", employee.name);
        assertEquals(Integer.valueOf(30), employee.age);
        assertEquals(Integer.valueOf(0), employee.version);
        employee.age = 50;
        dao.update(employee);
        assertEquals(Integer.valueOf(1), employee.version);
    });

    // Second transaction
    // Confirm that updating is success
    tm.required(() -> {
        Employee employee = dao.selectById(1);
        assertEquals("ALLEN", employee.name);
        assertEquals(Integer.valueOf(50), employee.age);
        assertEquals(Integer.valueOf(1), employee.version);
    });
}

Run JUnit and confirm that this code works.

The SQL generated for the update is as follows:

update Employee set age = 50, name = 'ALLEN', version = 0 + 1 where id = 1 and version = 0

The version number used for optimistic concurrency control is incremented automatically.

Delete

To execute a delete, call a DAO method annotated with the @Delete annotation.

Execute delete process

Confirm that the following code exists in EmployeeDao:

@Delete
int delete(Employee employee);

Execute the delete using this method.

Add the following code to EmployeeDaoTest:

@Test
public void testDelete() {
    TransactionManager tm = AppConfig.singleton().getTransactionManager();

    // First transaction
    // Execute deleting
    tm.required(() -> {
        Employee employee = dao.selectById(1);
        dao.delete(employee);
    });

    // Second transaction
    // Confirm that deleting is success
    tm.required(() -> {
        Employee employee = dao.selectById(1);
        assertNull(employee);
    });
}

Run JUnit and confirm that this code works.

The SQL generated for the delete is as follows:

delete from Employee where id = 1 and version = 0

The identifier and the version number are specified in the search condition.

Get started! (IntelliJ IDEA)

Summary

We introduce how to set up a development environment and how to execute basic database access.

Install JDK

Install JDK 8.

Note

Doma supports JDK 8 and later. See also Which version of JDK does Doma support?.

Install IntelliJ IDEA

Install IntelliJ IDEA Community Edition.

Note

In this document, we use IntelliJ IDEA Community Edition 2019.2.1 as the IDE. If you use IntelliJ IDEA Ultimate Edition, we recommend also using the Doma support plugin for IntelliJ.

Import template project

Clone simple-boilerplate from GitHub:

$ git clone https://github.com/domaframework/simple-boilerplate.git

Start IntelliJ IDEA and execute Import Project, and select the cloned simple-boilerplate:

_images/idea-welcome.png

Check Import project from external model and select Gradle:

_images/idea-import-project.png

Push the Finish button. If a message saying the project was synced successfully is shown in the Build tool window, the import succeeded.

Setting of Annotation Processor

Open Build, Execution, Deployment > Compiler > Annotation Processors from Preferences. Check Enable annotation processing:

_images/idea-annotation-processors.png

Setting of Build and Run

Open Build, Execution, Deployment > Build Tools > Gradle from Preferences. Choose IntelliJ IDEA for Run tests using:

_images/run-tests-using.png

Try building the project and running the test cases.

Structure of template project

The project’s source code structure is as follows:

─ src
  ├── main
  │   ├── java
  │   │   └── boilerplate
  │   │       ├── AppConfig.java
  │   │       ├── dao
  │   │       │   ├── AppDao.java
  │   │       │   └── EmployeeDao.java
  │   │       └── entity
  │   │           └── Employee.java
  │   └── resources
  │       └── META-INF
  │           └── boilerplate
  │               └── dao
  │                   ├── AppDao
  │                   │   ├── create.script
  │                   │   └── drop.script
  │                   └── EmployeeDao
  │                       ├── selectAll.sql
  │                       └── selectById.sql
  └── test
      ├── java
      │   └── boilerplate
      │       ├── DbResource.java
      │       └── dao
      │           └── EmployeeDaoTest.java
      └── resources

The important files are explained below.

AppConfig.java
The configuration class required to run Doma.
AppDao.java
A utility that creates and drops the database schema used in this application. It is not needed in a production environment. The script files under META-INF/boilerplate/dao/AppDao/ are used to create and drop the schema.
Employee.java
The entity class that corresponds to the EMPLOYEE table in the database.
EmployeeDao.java
The DAO interface that retrieves and updates the Employee class. The SQL files under META-INF/boilerplate/dao/EmployeeDao/ are used.
EmployeeDaoTest.java
A test that uses EmployeeDao. You can learn about Doma by adding test cases to this file. Because the database schema is created and dropped for each test method, data changes do not affect other tests.

SQL file

Open the META-INF/boilerplate/dao/EmployeeDao/selectById.sql file. Its content is as follows:

select
    /*%expand*/*
from
    employee
where
    id = /* id */0

The /*%expand*/ directive expands the column list by referencing the entity class mapped to the Java method.

The /* id */ comment indicates that the value of the Java method parameter id is bound to the SQL at this position.

The 0 that follows is test data. By including test data, you can easily verify that the SQL has no mistakes when you execute it with a tool. The test data is not used when the SQL is executed from a Java program.

For details, see SQL templates.

Insert

To execute an insert, call a DAO method annotated with the @Insert annotation.

Execute insert process

Confirm that the following code exists in EmployeeDao:

@Insert
int insert(Employee employee);

Execute the insert using this method.

Add the following code to EmployeeDaoTest:

@Test
public void testInsert() {
    TransactionManager tm = AppConfig.singleton().getTransactionManager();

    Employee employee = new Employee();

    // First transaction
    // Execute inserting
    tm.required(() -> {
        employee.name = "HOGE";
        employee.age = 20;
        dao.insert(employee);
        assertNotNull(employee.id);
    });

    // Second transaction
    // Confirm that inserting is success
    tm.required(() -> {
        Employee employee2 = dao.selectById(employee.id);
        assertEquals("HOGE", employee2.name);
        assertEquals(Integer.valueOf(20), employee2.age);
        assertEquals(Integer.valueOf(1), employee2.version);
    });
}

Run JUnit and confirm that this code works.

The SQL generated for the insert is as follows:

insert into Employee (age, id, name, version) values (20, 100, 'HOGE', 1)

The identifier and the version number are set automatically.

Update

To execute an update, call a DAO method annotated with the @Update annotation.

Execute update process

Confirm that the following code exists in EmployeeDao:

@Update
int update(Employee employee);

Execute the update using this method.

Add the following code to EmployeeDaoTest:

@Test
public void testUpdate() {
    TransactionManager tm = AppConfig.singleton().getTransactionManager();

    // First transaction
    // Search and update age field
    tm.required(() -> {
        Employee employee = dao.selectById(1);
        assertEquals("ALLEN", employee.name);
        assertEquals(Integer.valueOf(30), employee.age);
        assertEquals(Integer.valueOf(0), employee.version);
        employee.age = 50;
        dao.update(employee);
        assertEquals(Integer.valueOf(1), employee.version);
    });

    // Second transaction
    // Confirm that updating is success
    tm.required(() -> {
        Employee employee = dao.selectById(1);
        assertEquals("ALLEN", employee.name);
        assertEquals(Integer.valueOf(50), employee.age);
        assertEquals(Integer.valueOf(1), employee.version);
    });
}

Run JUnit and confirm that this code works.

The SQL generated for the update is as follows:

update Employee set age = 50, name = 'ALLEN', version = 0 + 1 where id = 1 and version = 0

The version number used for optimistic concurrency control is incremented automatically.

Delete

To execute a delete, call a DAO method annotated with the @Delete annotation.

Execute delete process

Confirm that the following code exists in EmployeeDao:

@Delete
int delete(Employee employee);

Execute the delete using this method.

Add the following code to EmployeeDaoTest:

@Test
public void testDelete() {
    TransactionManager tm = AppConfig.singleton().getTransactionManager();

    // First transaction
    // Execute deleting
    tm.required(() -> {
        Employee employee = dao.selectById(1);
        dao.delete(employee);
    });

    // Second transaction
    // Confirm that deleting is success
    tm.required(() -> {
        Employee employee = dao.selectById(1);
        assertNull(employee);
    });
}

Run JUnit and confirm that this code works.

The SQL generated for the delete is as follows:

delete from Employee where id = 1 and version = 0

The identifier and the version number are specified in the search condition.

Configuration

The configurable items are returned from the methods of a class that implements the org.seasar.doma.jdbc.Config interface.

Configurable items

DataSource

Return a JDBC DataSource from the getDataSource method. If you need local transactions provided by Doma, return a LocalTransactionDataSource.

See also: Transaction

Note

Required item

DataSource’s name

Return the DataSource’s name from the getDataSourceName method. In an environment where multiple DataSources are used, the name is important. You have to give a unique name to each DataSource.

The default value is the fully qualified name of the implementation class of Config.

SQL dialect

Return a Dialect from the getDialect method. You have to choose an appropriate dialect for the database you use.

Doma provides the following dialects:

Database                    Dialect Name      Description
DB2                         Db2Dialect
H2 Database Engine 1.2.126  H212126Dialect    H2 Database Engine 1.2.126
H2 Database                 H2Dialect         H2 Database Engine 1.3.171 and above
HSQLDB                      HsqldbDialect
Microsoft SQL Server 2008   Mssql2008Dialect  Microsoft SQL Server 2008
Microsoft SQL Server        MssqlDialect      Microsoft SQL Server 2012 and above
MySQL                       MySqlDialect
Oracle Database 11g         Oracle11Dialect   Oracle Database 11g
Oracle Database             OracleDialect     Oracle Database 12c and above
PostgreSQL                  PostgresDialect
SQLite                      SqliteDialect

These dialects are located in the org.seasar.doma.jdbc.dialect package.
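
For example, a Config implementation for PostgreSQL could return the dialect like this (a minimal sketch; pick the dialect from the table above that matches your database):

@Override
public Dialect getDialect() {
    return new PostgresDialect(); // org.seasar.doma.jdbc.dialect.PostgresDialect
}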

Note

Required item

Logger

Return a JdbcLogger from the getJdbcLogger method.

Doma provides the following JdbcLogger:

  • org.seasar.doma.jdbc.UtilLoggingJdbcLogger

The default JdbcLogger is UtilLoggingJdbcLogger which uses java.util.logging.

SQL File Repository

Return a SqlFileRepository from the getSqlFileRepository method.

Doma provides the following SqlFileRepository implementations:

  • org.seasar.doma.jdbc.GreedyCacheSqlFileRepository
  • org.seasar.doma.jdbc.NoCacheSqlFileRepository

The default SqlFileRepository is GreedyCacheSqlFileRepository which caches the result of SQL parsing without limitation.
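
For example, to disable caching during development, a Config implementation could override the method as follows (a sketch; the default GreedyCacheSqlFileRepository is usually the right choice in production):

@Override
public SqlFileRepository getSqlFileRepository() {
    return new NoCacheSqlFileRepository(); // re-parses the SQL file on every access
}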

Controlling REQUIRES_NEW transaction

Return a RequiresNewController from the getRequiresNewController method. RequiresNewController may begin new transactions to shorten the time that transactions hold locks.

This feature is used only when you use @TableGenerator, which generates identifiers using a table.

The default RequiresNewController does nothing.

Loading classes

Return a ClassHelper from the getClassHelper method.

When the application server or framework you use loads classes in its own specific way, consider creating your own ClassHelper.

The default ClassHelper mainly loads classes with Class#forName.

Choosing SQL format contained in exception messages

Return a SqlLogType from the getExceptionSqlLogType method. The default SqlLogType includes the formatted SQL in exception messages.

Handling unknown columns

Return an UnknownColumnHandler from the getUnknownColumnHandler method. During result set mapping, if a column that is unknown to the entity class is found, the UnknownColumnHandler handles the situation.

The default UnknownColumnHandler throws an UnknownColumnException.

Naming convention for tables and columns

Return a Naming from the getNaming method. The naming element of @Entity takes precedence over this value. When you specify explicit values in the name elements of @Table and @Column, the naming convention is not applied to them.

The default Naming does nothing.
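
For example, to map camelCase Java names to upper snake case table and column names, a Config implementation could return one of the predefined constants (a sketch; whether SNAKE_UPPER_CASE fits depends on your schema):

@Override
public Naming getNaming() {
    return Naming.SNAKE_UPPER_CASE; // employeeName -> EMPLOYEE_NAME
}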

Naming convention for keys of java.util.Map

Return a MapKeyNaming from the getMapKeyNaming method. The MapKeyNaming is used when the result set is mapped to java.util.Map<String, Object>.

The default MapKeyNaming does nothing.

Local transaction manager

Return a LocalTransactionManager from the getTransactionManager method. The getTransactionManager method throws UnsupportedOperationException by default.

See also: Transaction

Adding SQL identifiers to the SQLs as a comment

Return a Commenter from the getCommenter method.

Doma provides the following Commenter:

  • org.seasar.doma.jdbc.CallerCommenter

The default Commenter does nothing.

Command implementors

Return a CommandImplementors from the getCommandImplementors method. For example, CommandImplementors gives you a hook into the execution of the JDBC API.

Query implementors

Return a QueryImplementors from the getQueryImplementors method. For example, QueryImplementors gives you a hook to rewrite SQL statements.

Query timeout

Return the query timeout (in seconds) from the getQueryTimeout method. This value is used as the default in Queries.

Max rows

Return the max rows from the getMaxRows method. This value is used as default in Search.

Fetch size

Return the fetch size from the getFetchSize method. This value is used as default in Search.

Batch size

Return the batch size from the getBatchSize method. This value is used as default in Batch insert, Batch update and Batch delete.
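
These values can be overridden together in a Config implementation; a minimal sketch with purely illustrative numbers (not recommendations):

@Override
public int getQueryTimeout() {
    return 10; // seconds
}

@Override
public int getMaxRows() {
    return 100;
}

@Override
public int getFetchSize() {
    return 50;
}

@Override
public int getBatchSize() {
    return 20;
}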

Providing entity listeners

Return an EntityListenerProvider from the getEntityListenerProvider method. When you want to obtain entity listeners from a dependency injection container, create your own EntityListenerProvider.

The default EntityListenerProvider gets the entity listener from the accepted supplier.

Loading JDBC drivers

All JDBC drivers are loaded automatically by the service provider mechanism.

Warning

In some environments, however, the mechanism does not work appropriately. For example, you may run into this when you use Apache Tomcat. See also: DriverManager, the service provider mechanism and memory leaks

Configuration definition

Simple definition

The simple definition is appropriate in the following cases:

  • The configuration instance isn’t managed in the dependency injection container
  • Local transactions are used

@SingletonConfig
public class AppConfig implements Config {

    private static final AppConfig CONFIG = new AppConfig();

    private final Dialect dialect;

    private final LocalTransactionDataSource dataSource;

    private final TransactionManager transactionManager;

    private AppConfig() {
        dialect = new H2Dialect();
        dataSource = new LocalTransactionDataSource(
                "jdbc:h2:mem:tutorial;DB_CLOSE_DELAY=-1", "sa", null);
        transactionManager = new LocalTransactionManager(
                dataSource.getLocalTransaction(getJdbcLogger()));
    }

    @Override
    public Dialect getDialect() {
        return dialect;
    }

    @Override
    public DataSource getDataSource() {
        return dataSource;
    }

    @Override
    public TransactionManager getTransactionManager() {
        return transactionManager;
    }

    public static AppConfig singleton() {
        return CONFIG;
    }
}

Note

Remember to annotate the class with @SingletonConfig

Specify the above class in the config element of @Dao:

@Dao(config = AppConfig.class)
public interface EmployeeDao {

    @Select
    Employee selectById(Integer id);
}

Advanced definition

The advanced definition is appropriate in the following cases:

  • The configuration instance is managed as a singleton object in the dependency injection container
  • The transaction manager is provided from the application server or framework you use

Suppose the dialect and the dataSource are injected by the dependency injection container:

public class AppConfig implements Config {

    private Dialect dialect;

    private DataSource dataSource;

    @Override
    public Dialect getDialect() {
        return dialect;
    }

    public void setDialect(Dialect dialect) {
        this.dialect = dialect;
    }

    @Override
    public DataSource getDataSource() {
        return dataSource;
    }

    public void setDataSource(DataSource dataSource) {
        this.dataSource = dataSource;
    }
}

To inject the instance of the above class into your DAO implementation instance, you have to annotate your DAO interfaces with @AnnotateWith:

@Dao
@AnnotateWith(annotations = {
    @Annotation(target = AnnotationTarget.CONSTRUCTOR, type = javax.inject.Inject.class),
    @Annotation(target = AnnotationTarget.CONSTRUCTOR_PARAMETER, type = javax.inject.Named.class, elements = "\"config\"") })
public interface EmployeeDao {

    @Select
    Employee selectById(Integer id);
}
@Dao
@AnnotateWith(annotations = {
    @Annotation(target = AnnotationTarget.CONSTRUCTOR, type = javax.inject.Inject.class),
    @Annotation(target = AnnotationTarget.CONSTRUCTOR_PARAMETER, type = javax.inject.Named.class, elements = "\"config\"") })
public interface DepartmentDao {

    @Select
    Department selectById(Integer id);
}

To avoid annotating your DAO interfaces with @AnnotateWith repeatedly, annotate an arbitrary annotation with it only once:

@AnnotateWith(annotations = {
    @Annotation(target = AnnotationTarget.CONSTRUCTOR, type = javax.inject.Inject.class),
    @Annotation(target = AnnotationTarget.CONSTRUCTOR_PARAMETER, type = javax.inject.Named.class, elements = "\"config\"") })
public @interface InjectConfig {
}

Then, you can annotate your DAO interfaces with the above @InjectConfig annotation:

@Dao
@InjectConfig
public interface EmployeeDao {

    @Select
    Employee selectById(Integer id);
}
@Dao
@InjectConfig
public interface DepartmentDao {

    @Select
    Department selectById(Integer id);
}

Basic classes

Overview

The Java types that can be mapped to database column types are called “basic classes” in Doma.

List of basic classes

  • primitive types except char
  • wrapper classes for the above primitive types
  • enum types
  • byte[]
  • java.lang.String
  • java.lang.Object
  • java.math.BigDecimal
  • java.math.BigInteger
  • java.time.LocalDate
  • java.time.LocalTime
  • java.time.LocalDateTime
  • java.sql.Date
  • java.sql.Time
  • java.sql.Timestamp
  • java.sql.Array
  • java.sql.Blob
  • java.sql.Clob
  • java.sql.SQLXML
  • java.util.Date

Differences between temporal classes

java.time.LocalDate: represents SQL DATE
java.time.LocalTime: represents SQL TIME
java.time.LocalDateTime: represents SQL TIMESTAMP; may hold nanoseconds if the RDBMS supports it
java.sql.Date: represents SQL DATE
java.sql.Time: represents SQL TIME
java.sql.Timestamp: represents SQL TIMESTAMP; may hold nanoseconds if the RDBMS supports it
java.util.Date: represents SQL TIMESTAMP; doesn’t hold nanoseconds

Examples

Using in entity class

@Entity
public class Employee {

    @Id
    Integer employeeId;

    Optional<String> employeeName;

    @Version
    Long versionNo;

    ...
}

Using in domain class

@Domain(valueType = String.class)
public class PhoneNumber {

    private final String value;

    public PhoneNumber(String value) {
        this.value = value;
    }

    public String getValue() {
        return value;
    }
}

Using in DAO interface

@Dao(config = AppConfig.class)
public interface EmployeeDao {

    @Select
    Employee selectById(Integer employeeId);

    @Select
    List<String> selectAllName();
}

Domain classes

A domain class represents a table column and allows you to handle the column value as a Java object. In the Doma framework, a domain means all the values which a data type may contain. In short, a domain class is a user-defined class that can be mapped to a column. The use of domain classes is optional.

Every domain class is either an internal domain class or an external domain class.

Internal domain classes

The internal domain class must be annotated with @Domain. The @Domain’s valueType element corresponds to the data type of a column. Specify one of the Basic classes in the valueType element.

Instantiation with a constructor

The default value of the @Domain’s factoryMethod element is new. The value new means that the object of the annotated class is created with a constructor.

@Domain(valueType = String.class)
public class PhoneNumber {

    private final String value;

    public PhoneNumber(String value) {
        this.value = value;
    }

    public String getValue() {
        return value;
    }

    public String getAreaCode() {
       ...
    }
}

Instantiation with a static factory method

To create the object of the annotated class with a static factory method, specify the method name in the @Domain’s factoryMethod element.

The method must be static and non-private:

@Domain(valueType = String.class, factoryMethod = "of")
public class PhoneNumber {

    private final String value;

    private PhoneNumber(String value) {
        this.value = value;
    }

    public String getValue() {
        return value;
    }

    public String getAreaCode() {
       ...
    }

    public static PhoneNumber of(String value) {
        return new PhoneNumber(value);
    }
}

With a static factory method, you can apply the @Domain annotation to enum types:

@Domain(valueType = String.class, factoryMethod = "of")
public enum JobType {
    SALESMAN("10"),
    MANAGER("20"),
    ANALYST("30"),
    PRESIDENT("40"),
    CLERK("50");

    private final String value;

    private JobType(String value) {
        this.value = value;
    }

    public static JobType of(String value) {
        for (JobType jobType : JobType.values()) {
            if (jobType.value.equals(value)) {
                return jobType;
            }
        }
        throw new IllegalArgumentException(value);
    }

    public String getValue() {
        return value;
    }
}

Using type parameters in internal domain classes

Internal domain class declarations can have type parameters:

@Domain(valueType = int.class)
public class Identity<T> {

    private final int value;

    public Identity(int value) {
        this.value = value;
    }

    public int getValue() {
        return value;
    }
}

When you create the object of the annotated class with a static factory method, the method declaration must have the same type parameters that are declared in the class declaration:

@Domain(valueType = int.class, factoryMethod = "of")
public class Identity<T> {

    private final int value;

    private Identity(int value) {
        this.value = value;
    }

    public int getValue() {
        return value;
    }

    public static <T> Identity<T> of(int value) {
        return new Identity<T>(value);
    }
}

External domain classes

This feature allows you to define arbitrary classes as domain classes, even if the classes cannot be annotated with the @Domain annotation.

To define external domain classes, you have to do as follows:

  • Create a class that implements org.seasar.doma.jdbc.domain.DomainConverter and annotate it with @ExternalDomain
  • Create a class that is annotated with @DomainConverters
  • Specify the class annotated with @ExternalDomain in the @DomainConverters’s value element
  • Specify the fully qualified name of the class annotated with @DomainConverters in the option of Annotation processing

Suppose, for instance, there is the PhoneNumber class that you cannot change:

public class PhoneNumber {

    private final String value;

    public PhoneNumber(String value) {
        this.value = value;
    }

    public String getValue() {
        return value;
    }

    public String getAreaCode() {
       ...
    }
}

First, to define the PhoneNumber class as an external domain class, create the following class:

@ExternalDomain
public class PhoneNumberConverter implements DomainConverter<PhoneNumber, String> {

    public String fromDomainToValue(PhoneNumber domain) {
        return domain.getValue();
    }

    public PhoneNumber fromValueToDomain(String value) {
        if (value == null) {
            return null;
        }
        return new PhoneNumber(value);
    }
}

Then create the following class and specify the above class in the @DomainConverters’s value element:

@DomainConverters({ PhoneNumberConverter.class })
public class DomainConvertersProvider {
}

Finally, specify the fully qualified name of the above class in the option of Annotation processing. If you use Gradle, specify the option in the build script as follows:

compileJava {
    options {
        compilerArgs = ['-Adoma.domain.converters=example.DomainConvertersProvider']
    }
}

Using type parameters in external domain classes

External domain class declarations can have type parameters:

public class Identity<T> {

    private final int value;

    public Identity(int value) {
        this.value = value;
    }

    public int getValue() {
        return value;
    }
}

In the DomainConverter implementation class, specify a wildcard ? as the type argument of the external domain class:

@ExternalDomain
public class IdentityConverter implements DomainConverter<Identity<?>, Integer> {

    public Integer fromDomainToValue(Identity<?> domain) {
        return domain.getValue();
    }

    @SuppressWarnings("rawtypes")
    public Identity<?> fromValueToDomain(Integer value) {
        if (value == null) {
            return null;
        }
        return new Identity(value);
    }
}

Example

The domain classes shown above are used as follows:

@Entity
public class Employee {

    @Id
    Identity<Employee> employeeId;

    String employeeName;

    PhoneNumber phoneNumber;

    JobType jobType;

    @Version
    Integer versionNo;

    ...
}
@Dao(config = AppConfig.class)
public interface EmployeeDao {

    @Select
    Employee selectById(Identity<Employee> employeeId);

    @Select
    Employee selectByPhoneNumber(PhoneNumber phoneNumber);

    @Select
    List<PhoneNumber> selectAllPhoneNumber();

    @Select
    Employee selectByJobType(JobType jobType);

    @Select
    List<JobType> selectAllJobTypes();
}

Embeddable classes

Embeddable classes group the properties for Entity classes.

Embeddable definition

The following code snippet shows how to define an embeddable:

@Embeddable
public class Address {

    final String city;

    final String street;

    @Column(name = "ZIP_CODE")
    final String zip;

    public Address(String city, String street, String zip) {
        this.city = city;
        this.street = street;
        this.zip = zip;
    }
}

The embeddable class is used as the entity field type:

@Entity
public class Employee {
    @Id
    Integer id;

    Address address;
}

The above entity definition is equivalent to the following one:

@Entity
public class Employee {
    @Id
    Integer id;

    String city;

    String street;

    @Column(name = "ZIP_CODE")
    String zip;
}

Naming convention

The naming convention is inherited from the enclosing entity class.

Field definition

By default, the fields are persistent and correspond to the database columns or result set columns.

The field type must be one of the following:

@Embeddable
public class Address {
    ...
    String street;
}

Column

You can specify the corresponding column name with the @Column annotation:

@Column(name = "ZIP_CODE")
final String zip;

Transient

If an embeddable has fields that you don’t want to persist, you can annotate them using @Transient:
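
A minimal sketch (the displayLabel field is a hypothetical example):

@Embeddable
public class Address {

    final String city;

    final String street;

    @Transient
    String displayLabel; // not mapped to any column

    public Address(String city, String street) {
        this.city = city;
        this.street = street;
    }
}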

Method definition

There are no limitations in the use of methods.

Example

Employee employee = new Employee(); // Entity
Address address = new Address("Tokyo", "Yaesu", "103-0028"); // Embeddable
employee.setAddress(address);

Entity classes

Entity classes correspond to database tables or query result sets.

Entity definition

The following code snippet shows how to define an entity:

@Entity
public class Employee {
    ...
}

An entity class can inherit from another entity class.

The following code snippet shows how to inherit from another entity class:

@Entity
public class SkilledEmployee extends Employee {
    ...
}

Entity listeners

Entity listeners work before/after Doma issues the database modification statements - INSERT, DELETE and UPDATE.

The following code snippet shows how to define an entity listener:

public class EmployeeEntityListener implements EntityListener<Employee> {
    ...
}
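
As a rough sketch of what such a listener can contain (the callback signatures follow org.seasar.doma.jdbc.entity.EntityListener; override only the callbacks you need, and treat the method bodies as placeholders):

public class EmployeeEntityListener implements EntityListener<Employee> {

    @Override
    public void preInsert(Employee entity, PreInsertContext<Employee> context) {
        // runs just before the INSERT statement is issued
    }

    @Override
    public void postUpdate(Employee entity, PostUpdateContext<Employee> context) {
        // runs just after the UPDATE statement is issued
    }
}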

To use the entity listener, specify it in the listener property of the @Entity annotation:

@Entity(listener = EmployeeEntityListener.class)
public class Employee {
    ...
}

An entity subclass inherits its parent’s entity listener.

Naming convention

Naming convention maps the names between:

  • the database tables and the Java entity classes
  • the database columns and the Java entity fields

The following code snippet shows how to apply the naming convention to an entity:

@Entity(naming = NamingType.SNAKE_UPPER_CASE)
public class EmployeeInfo {
    ...
}

When the name property within the @Table or @Column annotation is explicitly specified, the naming convention is ignored.

An entity subclass inherits parent’s naming convention.

Immutable

An entity class can be immutable.

The following code snippet shows how to define an immutable entity:

@Entity(immutable = true)
public class Employee {
    @Id
    final Integer id;
    final String name;
    @Version
    final Integer version;

    public Employee(Integer id, String name, Integer version) {
        this.id = id;
        this.name = name;
        this.version = version;
    }
    ...
}

The immutable property within the @Entity annotation must be true. The persistent fields must be final.

An entity subclass inherits parent’s immutable property.

Table

You can specify the corresponding table name with the @Table annotation:

@Entity
@Table(name = "EMP")
public class Employee {
    ...
}

Without the @Table annotation, the table name is resolved by Naming Convention.

Field definition

By default, the fields are persistent and correspond to the database columns or result set columns.

The field type must be one of the following:

The following code snippet shows how to define a field:

@Entity
public class Employee {
    ...
    Integer employeeId;
}

Column

You can specify the corresponding column name with the @Column annotation:

@Column(name = "ENAME")
String employeeName;

To exclude fields from INSERT or UPDATE statements, specify false in the insertable or updatable properties of the @Column annotation:

@Column(insertable = false, updatable = false)
String employeeName;

Without the @Column annotation, the column name is resolved by Naming Convention.

Note

When the field type is an embeddable class, you cannot specify the @Column annotation on the field.

Id

The database primary keys are represented with the @Id annotation:

@Id
Integer id;

When there is a composite primary key, use the @Id annotation on multiple fields:

@Id
Integer id;

@Id
Integer id2;

Note

When the field type is an embeddable class, you cannot specify the @Id annotation on the field.

Id generation

You can instruct Doma to generate id values automatically using the @GeneratedValue annotation.

The field type must be one of the following:

  • a subclass of java.lang.Number
  • Domain classes whose value type is a subclass of java.lang.Number
  • java.util.Optional whose element is one of the above types
  • OptionalInt
  • OptionalLong
  • OptionalDouble
  • the primitive number types

Note

The generated values are assigned to the field only when the field is either null or less than 0. If you use one of the primitive types as the field type, initialize the field with a value that is less than 0, such as -1.
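
For example, a primitive id field could be declared like this (a sketch):

@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
long id = -1; // negative initial value, so the generated value is always assigned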

Id generation by IDENTITY

To generate values using the RDBMS IDENTITY function, specify the GenerationType.IDENTITY enum value in the strategy property of the @GeneratedValue annotation:

@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
Integer id;

In advance, define the database primary key as IDENTITY.

Warning

Not all RDBMSs support the IDENTITY function.

Id generation by SEQUENCE

To generate values using an RDBMS SEQUENCE, specify the GenerationType.SEQUENCE enum value in the strategy property of the @GeneratedValue annotation, and use the @SequenceGenerator annotation:

@Id
@GeneratedValue(strategy = GenerationType.SEQUENCE)
@SequenceGenerator(sequence = "EMPLOYEE_SEQ")
Integer id;

In advance, define the SEQUENCE in the database. The SEQUENCE definition, such as the name, the allocation size, and the initial size, must correspond to the properties of the @SequenceGenerator annotation.

Warning

Not all RDBMSs support SEQUENCEs.

Id generation by TABLE

To generate values using an RDBMS table, specify the GenerationType.TABLE enum value in the strategy property of the @GeneratedValue annotation, and use the @TableGenerator annotation:

@Id
@GeneratedValue(strategy = GenerationType.TABLE)
@TableGenerator(pkColumnValue = "EMPLOYEE_ID")
Integer id;

In advance, define the table in the database. The table’s definition must correspond to the properties of the @TableGenerator annotation. For example, the DDL should be as follows:

CREATE TABLE ID_GENERATOR(PK VARCHAR(20) NOT NULL PRIMARY KEY, VALUE INTEGER NOT NULL);

You can change the table name and the column names using the properties within the @TableGenerator annotation.
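
For example, if you rename the table or its columns, keep the annotation in sync with the DDL; a sketch (the element names shown are assumptions based on the defaults above, so check them against the @TableGenerator Javadoc):

@Id
@GeneratedValue(strategy = GenerationType.TABLE)
@TableGenerator(
        table = "ID_GENERATOR",
        pkColumnName = "PK",
        valueColumnName = "VALUE",
        pkColumnValue = "EMPLOYEE_ID")
Integer id;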

Version

The version fields for optimistic locking are represented with the @Version annotation.

The field type must be one of the following:

  • a subclass of java.lang.Number
  • Domain classes whose value type is a subclass of java.lang.Number
  • java.util.Optional whose element is one of the above types
  • OptionalInt
  • OptionalLong
  • OptionalDouble
  • the primitive number types

@Version
Integer version;

Note

When the field type is an embeddable class, you cannot specify the @Version annotation on the field.

Tenant Id

The tenant id fields are represented with the @TenantId annotation. The column corresponding to the annotated field is included in the WHERE clause of UPDATE and DELETE statements.

@TenantId
String tenantId;

Note

When the field type is an embeddable class, you cannot specify the @TenantId annotation on the field.

Transient

If an entity has fields that you don’t want to persist, you can annotate them using @Transient:

@Transient
List<String> nameList;

OriginalStates

If you want to include only changed values in UPDATE statements, you can define a field annotated with @OriginalStates. The field holds the original values that were fetched from the database.

Doma uses these values to determine which values have been changed in the application and includes only the changed values in UPDATE statements.

The following code snippet shows how to define original states:

@OriginalStates
Employee originalStates;

The field type must be the same as the entity type.
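
The effect, roughly (a sketch; the method names follow the earlier Employee examples and are illustrative only):

Employee employee = dao.selectById(1);     // originalStates captures the fetched values
employee.setSalary(new BigDecimal(2000));  // only salary now differs from the original states
dao.update(employee);                      // the generated UPDATE sets only the SALARY column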

Method definition

There are no limitations in the use of methods.

Example

Instantiate the Employee entity class and use its instance:

Employee employee = new Employee();
employee.setEmployeeId(1);
employee.setEmployeeName("SMITH");
employee.setSalary(new BigDecimal(1000));

Dao interfaces

A Data Access Object (DAO) is an interface for accessing the database.

Dao definition

A DAO is defined as an interface annotated with @Dao.

A class that implements the DAO interface is generated at compile time by annotation processing.

Query definition

Queries can be defined using annotations.

Use Query builders in a default method if you want to build queries freely in Java code.

Default method

You can write arbitrary Java code in default methods.

You can get the Config instance associated with a DAO instance by calling Config.get with the DAO instance as the argument.

@Dao(config = AppConfig.class)
public interface EmployeeDao {

    default int count() {
        Config config = Config.get(this);
        SelectBuilder builder = SelectBuilder.newInstance(config);
        builder.sql("select count(*) from employee");
        return builder.getScalarSingleResult(int.class);
    }
}

Example

The implementation class is generated by the annotation processor at compile time. Instantiate and use this implementation class. However, if the configuration class is managed by a DI container, the instantiation of the implementation class should also be controlled by the DI container.

EmployeeDao employeeDao = new EmployeeDaoImpl();
Employee employee = employeeDao.selectById(1);

By default, the implementation class name is the interface name suffixed with Impl. See Annotation processing to change the package and the suffix.

If you use the default constructor, the DataSource is determined by the configuration in the config element of @Dao. You can also instantiate the implementation with an explicitly specified DataSource:

DataSource dataSource = ...;
EmployeeDao employeeDao = new EmployeeDaoImpl(dataSource);
Employee employee = employeeDao.selectById(1);

You can also instantiate it with an explicitly specified Connection:

Connection connection = ...;
EmployeeDao employeeDao = new EmployeeDaoImpl(connection);
Employee employee = employeeDao.selectById(1);

A DAO interface does not need to be defined one-to-one with an entity class. One DAO interface can handle more than one entity class.

@Dao(config = AppConfig.class)
public interface MyDao {

    @Select
    Employee selectEmployeeById(int id);

    @Select
    Department selectDepartmentByName(String name);

    @Update
    int updateAddress(Address address);
}

Queries

Insert

Annotate a DAO method with @Insert to execute an insert.

@Dao(config = AppConfig.class)
public interface EmployeeDao {
    @Insert
    int insert(Employee employee);

    @Insert
    Result<ImmutableEmployee> insert(ImmutableEmployee employee);
}

By default, the INSERT statement is auto-generated. You can map an arbitrary SQL file by setting the sqlFile element of @Insert to true.

The entity listener’s preInsert method is called before the insert is executed if an entity listener is specified for the entity class parameter. Likewise, the entity listener’s postInsert method is called after the insert is executed.

Return value

The return value must be org.seasar.doma.jdbc.Result with the entity class as its element if the parameter is an immutable entity class.

Otherwise, the return value must be int, representing the updated count.

Insert by auto generated SQL

The parameter type must be an entity class. Only one parameter can be specified. The parameter must not be null.

@Insert
int insert(Employee employee);

@Insert
Result<ImmutableEmployee> insert(ImmutableEmployee employee);

Identifier

The identifier is auto-generated and set if the identifier of the entity class is annotated with @GeneratedValue.

See Id generation for cautionary points.

Version numbers

If the entity class has a property annotated with @Version and a value greater than 0 is explicitly set, that value is used.

If the value is not set or is less than 0, the value 1 is set automatically.

Controlling the insertion target properties
insertable

If the entity class has properties annotated with @Column, a property whose insertable element of @Column is false is excluded from the insertion.

exclude

A property specified in the exclude element of @Insert is excluded from the insertion. Even if the insertable element of @Column is true, the property is excluded from the insertion if it is specified in this element.

@Insert(exclude = {"name", "salary"})
int insert(Employee employee);

include

Only the properties specified in the include element of @Insert are included in the insertion. If the same property is specified in both the include and exclude elements of @Insert, the property is excluded from the insertion.

Even if a property is specified in this element, it is excluded from the insertion if the insertable element of @Column is false.

@Insert(include = {"name", "salary"})
int insert(Employee employee);

excludeNull

Properties whose values are null are excluded from the insertion if the excludeNull element of @Insert is true. When this element is true, a property whose value is null is excluded from the insertion even if the insertable element of @Column is true or the property is specified in the include element of @Insert.

@Insert(excludeNull = true)
int insert(Employee employee);

Insert by SQL file

To execute an insert with a SQL file, set the sqlFile element of @Insert to true and prepare a SQL file that corresponds to the method.

You can use arbitrary types as parameters. There is no limit on the number of parameters. A parameter may be null if its type is a basic type or a domain class. For other types, the parameter must not be null.

@Insert(sqlFile = true)
int insert(Employee employee);

@Insert(sqlFile = true)
Result<ImmutableEmployee> insert(ImmutableEmployee employee);

For example, the SQL file that corresponds to the above methods is written as follows:

insert into employee (id, name, salary, version)
values (/* employee.id */0,
        /* employee.name */'hoge',
        /* employee.salary */100,
        /* employee.version */0)

Automatic setting of the identifier and the version value is not performed when inserting with a SQL file. Also, the exclude, include, and excludeNull elements of @Insert are not referenced.

Unique constraint violation

UniqueConstraintException is thrown if a unique constraint violation occurs, regardless of whether a SQL file is used.

Query timeout

You can specify the query timeout in seconds in the queryTimeout element of @Insert.

@Insert(queryTimeout = 10)
int insert(Employee employee);

This setting is applied regardless of whether a SQL file is used. If no value is set in the queryTimeout element, the query timeout specified in Configuration is used.

SQL log output format

You can specify the SQL log output format in the sqlLog element of @Insert.

@Insert(sqlLog = SqlLogType.RAW)
int insert(Employee employee);

SqlLogType.RAW means that the SQL is logged with its bind parameters.

Update

Annotate a DAO method with @Update to execute an update.

@Dao(config = AppConfig.class)
public interface EmployeeDao {
    @Update
    int update(Employee employee);

    @Update
    Result<ImmutableEmployee> update(ImmutableEmployee employee);
}

By default, the UPDATE statement is auto-generated. You can map an arbitrary SQL file by setting the sqlFile property of the @Update annotation to true.

The preUpdate method of the entity listener is called before the update is executed if an entity listener is specified for the entity class parameter. Likewise, the postUpdate method of the entity listener is called after the update is executed.

Return value

The return value must be org.seasar.doma.jdbc.Result with the entity class as its element if the parameter is an immutable entity class.

Otherwise, the return value must be int, representing the updated count.

Update by auto generated SQL

The parameter type must be an entity class. Only one parameter can be specified. The parameter must not be null.

@Update
int update(Employee employee);

@Update
Result<ImmutableEmployee> update(ImmutableEmployee employee);

Version number and optimistic concurrency control in auto generated SQL

Optimistic concurrency control is performed if the following conditions are satisfied:

  • The entity class parameter has a property annotated with @Version
  • The ignoreVersion element within the @Update annotation is false

If optimistic concurrency control is enabled, the version number is included along with the identifier in the update condition and is incremented by 1. If the updated count is 0, an OptimisticLockException representing an optimistic concurrency control failure is thrown. If the updated count is not 0, OptimisticLockException is not thrown and the version property of the entity is incremented by 1.

ignoreVersion

If the ignoreVersion property of the @Update annotation is true, the version number is not included in the update condition but is included in the SET clause of the UPDATE statement. The version number is updated with the value set by the application. In this case, OptimisticLockException is not thrown even if the updated count is 0.

@Update(ignoreVersion = true)
int update(Employee employee);

suppressOptimisticLockException

If the suppressOptimisticLockException property of @Update is true and a property annotated with @Version exists, the version number is included in the update condition and incremented by 1, but OptimisticLockException is not thrown even if the updated count is 0. The version property of the entity is still incremented by 1.

@Update(suppressOptimisticLockException = true)
int update(Employee employee);

Controlling the update target properties
updatable

If the entity class has properties annotated with @Column, a property whose updatable property of @Column is false is excluded from the update.

exclude

A property specified in the exclude property of the @Update annotation is excluded from the update. Even if the updatable property of the @Column annotation is true, the property is excluded from the update if it is specified in this element.

@Update(exclude = {"name", "salary"})
int update(Employee employee);

include

Only the properties specified in the include property of the @Update annotation are included in the update. If the same property is specified in both the include and exclude properties of @Update, the property is excluded from the update. Even if a property is specified in this element, it is excluded from the update if the updatable property of the @Column annotation is false.

@Update(include = {"name", "salary"})
int update(Employee employee);

excludeNull

Properties whose values are null are excluded from the update if the excludeNull property of the @Update annotation is true. When this element is true, a property whose value is null is excluded from the update even if the updatable property of the @Column annotation is true or the property is specified in the include property of the @Update annotation.

@Update(excludeNull = true)
int update(Employee employee);

includeUnchanged

This element is effective only if the entity class being updated has a property annotated with @OriginalStates.

If this element is true, all properties of the entity are update targets. That is, the columns corresponding to all properties are included in the SET clause of the UPDATE statement.

If this element is false, only the properties that have actually changed since the entity was fetched are update targets. That is, only the columns corresponding to the modified properties are included in the SET clause of the UPDATE statement.

@Update(includeUnchanged = true)
int update(Employee employee);

Update by SQL file

To execute an update with a SQL file, set the sqlFile property of the @Update annotation to true and prepare a SQL file that corresponds to the method.

Note

When updating with a SQL file, the rules differ depending on whether the /*%populate*/ directive is used.

When using the comment that generates the update column list

The first parameter type must be an entity class. There is no limit on the number of parameters. A parameter may be null if its type is a basic type or a domain class. A parameter of any other type must not be null.

@Update(sqlFile = true)
int update(Employee employee, BigDecimal salary);

@Update(sqlFile = true)
Result<ImmutableEmployee> update(ImmutableEmployee employee, BigDecimal salary);

For example, the SQL file that corresponds to the above methods is written as follows:

update employee set /*%populate*/ id = id where salary > /* salary */0

The rules for controlling the update target properties are the same as for Update by auto generated SQL.

When not using the comment that generates the update column list

You can use arbitrary types as parameters. There is no limit on the number of parameters. A parameter may be null if its type is a basic type or a domain class. A parameter of any other type must not be null.

@Update(sqlFile = true)
int update(Employee employee);

@Update(sqlFile = true)
Result<ImmutableEmployee> update(ImmutableEmployee employee);

For example, the SQL file that corresponds to the above methods is written as follows:

update employee set name = /* employee.name */'hoge', salary = /* employee.salary */100
where id = /* employee.id */0

The exclude, include, excludeNull, and includeUnchanged properties of the @Update annotation are not referenced when updating with a SQL file.

Version number and optimistic concurrency control in SQL file

Optimistic concurrency control is executed if the following conditions are satisfied.

  • An entity class is included in the parameters.
  • The leftmost entity class parameter has a property annotated with @Version.
  • The ignoreVersion property of the @Update annotation is false.

However, writing the SQL for optimistic concurrency control in the SQL file is the application developer's responsibility. For example, as in the SQL below, you must check the version number in the WHERE clause and increment the version number by 1 in the SET clause.

update EMPLOYEE set DELETE_FLAG = 1, VERSION = /* employee.version */1 + 1
where ID = /* employee.id */1 and VERSION = /* employee.version */1

OptimisticLockException, representing an optimistic concurrency control failure, is thrown if the update count of this SQL is 0. If the update count is not 0, OptimisticLockException is not thrown and the version property of the entity is incremented by 1.
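
As a usage sketch (the DAO and entity types follow the earlier examples), the caller can handle the failure like this:

EmployeeDao dao = new EmployeeDaoImpl();
try {
    dao.update(employee); // employee fetched earlier in the same unit of work
} catch (OptimisticLockException e) {
    // the row was updated or deleted by another transaction in the meantime
}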

ignoreVersion

If the ignoreVersion property of the @Update annotation is true, OptimisticLockException is not thrown even if the update count is 0. Also, the version property value of the entity is not modified.

@Update(sqlFile = true, ignoreVersion = true)
int update(Employee employee);
suppressOptimisticLockException

If the suppressOptimisticLockException property of the @Update annotation is true, OptimisticLockException is not thrown even if the update count is 0. However, the version property value of the entity is incremented by 1.

@Update(sqlFile = true, suppressOptimisticLockException = true)
int update(Employee employee);

Unique constraint violation

UniqueConstraintException is thrown if a unique constraint violation occurs, regardless of whether an SQL file is used.

Query timeout

You can specify the query timeout in seconds in the queryTimeout property of the @Update annotation.

@Update(queryTimeout = 10)
int update(Employee employee);

This setting is applied regardless of whether an SQL file is used. If no value is set in the queryTimeout property, the query timeout specified in Configuration is used.

SQL log output format

You can specify the SQL log output format in the sqlLog property of the @Update annotation.

@Update(sqlLog = SqlLogType.RAW)
int update(Employee employee);

SqlLogType.RAW means the SQL with bind parameters is output to the log.

Delete

Annotate a DAO method with @Delete to execute a delete.

@Dao(config = AppConfig.class)
public interface EmployeeDao {
    @Delete
    int delete(Employee employee);
}

By default the DELETE statement is auto generated. You can map an arbitrary SQL file to the method by setting the sqlFile property of the @Delete annotation to true.

If an entity listener is specified for the entity class parameter, its preDelete method is called before the delete is executed. Likewise, its postDelete method is called after the delete is executed.

Return value

The return value must be org.seasar.doma.jdbc.Result that has the entity class as an element if the parameter is an immutable entity class. Otherwise, the return value must be int representing the update count.

Delete by auto generated SQL

The parameter type must be an entity class. Only one parameter can be specified. The parameter must not be null.

@Delete
int delete(Employee employee);

@Delete
Result<ImmutableEmployee> delete(ImmutableEmployee employee);
Version number and optimistic concurrency control in auto generated SQL

Optimistic concurrency control is executed if the following conditions are satisfied.

  • The entity class parameter has a property annotated with @Version.
  • The ignoreVersion property of the @Delete annotation is false.

If optimistic concurrency control is enabled, the version number is included along with the identifier in the delete condition. OptimisticLockException, representing an optimistic concurrency control failure, is thrown if the delete count is 0 in that case.

ignoreVersion

If the ignoreVersion property of the @Delete annotation is true, the version number is not included in the delete condition. In this case OptimisticLockException is not thrown, even if the delete count is 0.

@Delete(ignoreVersion = true)
int delete(Employee employee);
suppressOptimisticLockException

If the suppressOptimisticLockException property of @Delete is true, the version number is included in the delete condition, but OptimisticLockException is not thrown even if the delete count is 0.

@Delete(suppressOptimisticLockException = true)
int delete(Employee employee);

Delete by SQL file

To execute a delete with an SQL file, set the sqlFile property of the @Delete annotation to true and prepare an SQL file that corresponds to the method.

You can use arbitrary types as parameters. There is no limit on the number of parameters. A parameter may be null if its type is a basic type or a domain class; otherwise it must not be null.

Entity listener methods are not called, even if an entity listener is specified for the entity.

@Delete(sqlFile = true)
int delete(Employee employee);

For example, you describe an SQL file like the one below to correspond to the above method.

delete from employee where name = /* employee.name */'hoge'
Version number and optimistic concurrency control in SQL File

Optimistic concurrency control is executed if the following conditions are satisfied.

  • An entity class is included in the parameters.
  • The leftmost entity class parameter has a property annotated with @Version.
  • The ignoreVersion property of the @Delete annotation is false.
  • The suppressOptimisticLockException property of the @Delete annotation is false.

However, writing the SQL for optimistic concurrency control in the SQL file is the application developer's responsibility. For example, as in the SQL below, you must check the version number in the WHERE clause.

delete from EMPLOYEE where ID = /* employee.id */1 and VERSION = /* employee.version */1

OptimisticLockException, representing an optimistic concurrency control failure, is thrown if the delete count of this SQL is 0. OptimisticLockException is not thrown if the delete count is not 0.

ignoreVersion

If the ignoreVersion property of the @Delete annotation is true, OptimisticLockException is not thrown even if the delete count is 0.

@Delete(sqlFile = true, ignoreVersion = true)
int delete(Employee employee);
suppressOptimisticLockException

If the suppressOptimisticLockException property of the @Delete annotation is true, OptimisticLockException is not thrown even if the delete count is 0.

@Delete(sqlFile = true, suppressOptimisticLockException = true)
int delete(Employee employee);

Query timeout

You can specify the query timeout in seconds in the queryTimeout property of the @Delete annotation.

@Delete(queryTimeout = 10)
int delete(Employee employee);

This setting is applied regardless of whether an SQL file is used. If no value is set in the queryTimeout property, the query timeout specified in Configuration is used.

SQL log output format

You can specify the SQL log output format in the sqlLog property of the @Delete annotation.

@Delete(sqlLog = SqlLogType.RAW)
int delete(Employee employee);

SqlLogType.RAW means the SQL with bind parameters is output to the log.

Batch insert

Annotate a DAO method with @BatchInsert to execute a batch insert.

@Dao(config = AppConfig.class)
public interface EmployeeDao {
    @BatchInsert
    int[] insert(List<Employee> employees);

    @BatchInsert
    BatchResult<ImmutableEmployee> insert(List<ImmutableEmployee> employees);
}

By default the INSERT statement is auto generated. You can map an arbitrary SQL file to the method by setting the sqlFile property of the @BatchInsert annotation to true.

If an entity listener is specified for the entity class parameter, its preInsert method is called for each entity before the insert is executed. Likewise, its postInsert method is called for each entity after the insert is executed.

Return value

The return value must be org.seasar.doma.jdbc.BatchResult that has the entity class as an element if the element type of the Iterable subtype parameter is an immutable entity class.

Otherwise, the return value must be int[], where each element represents the update count of the corresponding insert.
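
As a usage sketch for the immutable case (the DAO, entity, and getter names follow the earlier examples and should be checked against your Doma version), the BatchResult carries both the counts and the resulting entities:

EmployeeDao dao = new EmployeeDaoImpl();
BatchResult<ImmutableEmployee> result = dao.insert(employees);
int[] counts = result.getCounts();                        // one insert count per element
List<ImmutableEmployee> inserted = result.getEntities();  // entities after insertion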

Batch insert by auto generated SQL

The parameter type must be a subtype of java.lang.Iterable whose element type is an entity class. Only one parameter can be specified. The parameter must not be null. The number of elements in the returned array equals the number of elements in the Iterable. The insert count of each element is returned in the array.

Identifier

If the identifier of the entity class is annotated with @GeneratedValue, the identifier is auto generated and set.

See Id generation for cautionary points.
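
A minimal sketch of an entity with an auto generated identifier (the generation strategy and property names are illustrative):

@Entity
public class Employee {

    // generated by the database, for example an auto-increment column
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    Integer id;

    String name;
}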

Version number

If the entity class has a property annotated with @Version and a value greater than 0 is explicitly set, that value is used. If no value is set or the value is less than 0, the value 1 is set automatically.

Insert target property
insertable

If the entity class has properties annotated with @Column, a property whose @Column annotation specifies insertable = false is excluded from the insert target.
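
As a minimal sketch (the property and column are illustrative), such a property looks like this on the entity:

@Entity
public class Employee {

    @Id
    Integer id;

    String name;

    // never part of the generated INSERT column list
    @Column(insertable = false)
    LocalDateTime updatedAt;
}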

exclude

Properties specified in the exclude property of the @BatchInsert annotation are excluded from the insert target. Even if the insertable property of the @Column annotation is true, a property listed in this element is excluded from the insert target.

@BatchInsert(exclude = {"name", "salary"})
int[] insert(List<Employee> employees);
include

Only properties specified in the include property of the @BatchInsert annotation are included in the insert target. If the same property is specified in both the include property and the exclude property of @BatchInsert, it is excluded from the insert target. Even if a property is specified in this element, it is excluded from the insert target if the insertable property of its @Column annotation is false.

@BatchInsert(include = {"name", "salary"})
int[] insert(List<Employee> employees);

Batch insert by SQL file

To execute a batch insert with an SQL file, set the sqlFile property of the @BatchInsert annotation to true and prepare an SQL file that corresponds to the method.

@BatchInsert(sqlFile = true)
int[] insert(List<Employee> employees);

@BatchInsert(sqlFile = true)
BatchResult<ImmutableEmployee> insert(List<ImmutableEmployee> employees);

The parameter type must be a subtype of java.lang.Iterable whose element type is an entity class. Only one parameter can be specified. The parameter must not be null. The number of elements in the returned array equals the number of elements in the Iterable. The insert count of each element is returned in the array.

Entity listener methods are not called, even if an entity listener is specified for the entity class.

For example, you describe SQL like the one below to correspond to the above methods.

insert into employee (id, name, salary, version)
values (/* employees.id */0, /* employees.name */'hoge', /* employees.salary */100, /* employees.version */0)

In the SQL file, the parameter name refers to an element of the java.lang.Iterable subtype.

Automatic identifier setting and automatic version number setting are not executed in a batch insert by SQL file. Also, the exclude and include properties of @BatchInsert are not referenced.

Unique constraint violation

UniqueConstraintException is thrown if a unique constraint violation occurs, regardless of whether an SQL file is used.

Query timeout

You can specify the query timeout in seconds in the queryTimeout property of the @BatchInsert annotation.

@BatchInsert(queryTimeout = 10)
int[] insert(List<Employee> employees);

This setting is applied regardless of whether an SQL file is used. If no value is set in the queryTimeout property, the query timeout specified in the config class is used.

Batch size

You can specify the batch size in the batchSize property of the @BatchInsert annotation.

@BatchInsert(batchSize = 10)
int[] insert(List<Employee> employees);

This setting is applied regardless of whether an SQL file is used. If you do not specify a value in the batchSize property, the batch size specified in the Configuration class is applied.

SQL log output format

You can specify the SQL log output format in the sqlLog property of the @BatchInsert annotation.

@BatchInsert(sqlLog = SqlLogType.RAW)
int[] insert(List<Employee> employees);

SqlLogType.RAW means the SQL with bind parameters is output to the log.

Batch update

Annotate a DAO method with @BatchUpdate to execute a batch update.

@Dao(config = AppConfig.class)
public interface EmployeeDao {
    @BatchUpdate
    int[] update(List<Employee> employees);

    @BatchUpdate
    BatchResult<ImmutableEmployee> update(List<ImmutableEmployee> employees);
}

By default the UPDATE statement is auto generated. You can map an arbitrary SQL file to the method by setting the sqlFile property of the @BatchUpdate annotation to true.

If an entity listener is specified for the entity class parameter, its preUpdate method is called for each entity before the update is executed. Likewise, its postUpdate method is called for each entity after the update is executed.

Return value

The return value must be org.seasar.doma.jdbc.BatchResult that has the entity class as an element if the element type of the Iterable subtype parameter is an immutable entity class.

Otherwise, the return value must be int[], where each element represents the update count of the corresponding update.

Batch update by auto generated SQL

The parameter type must be a subtype of java.lang.Iterable whose element type is an entity class. Only one parameter can be specified. The parameter must not be null. The number of elements in the returned array equals the number of elements in the Iterable. The update count of each element is returned in the array.

Version number and optimistic concurrency control in auto generated SQL

Optimistic concurrency control is executed if the following conditions are satisfied.

  • The element entity class of the java.lang.Iterable subtype parameter has a property annotated with @Version.
  • The ignoreVersion property of the @BatchUpdate annotation is false.

If optimistic concurrency control is enabled, the version number is included along with the identifier in the update condition and is incremented by 1. BatchOptimisticLockException, representing an optimistic concurrency control failure, is thrown if the update count is 0 in that case. If the update count is 1, BatchOptimisticLockException is not thrown and the version property of the entity is incremented by 1.

ignoreVersion

If the ignoreVersion property of the @BatchUpdate annotation is true, the version number is not included in the update condition but is included in the SET clause of the UPDATE statement; the version number is updated with the value set by the application. In this case BatchOptimisticLockException is not thrown, even if the update count is 0.

@BatchUpdate(ignoreVersion = true)
int[] update(List<Employee> employees);
suppressOptimisticLockException

If the suppressOptimisticLockException property of @BatchUpdate is true and a property annotated with @Version exists, the version number is included in the update condition and incremented by 1, but BatchOptimisticLockException is not thrown even if the update count is 0. The version property value of the entity is still incremented by 1.

@BatchUpdate(suppressOptimisticLockException = true)
int[] update(List<Employee> employees);
Update target property
updatable

If the entity class has properties annotated with @Column, a property whose @Column annotation specifies updatable = false is excluded from the update target.

exclude

Properties specified in the exclude property of the @BatchUpdate annotation are excluded from the update target. Even if the updatable property of the @Column annotation is true, a property listed in this element is excluded from the update target.

@BatchUpdate(exclude = {"name", "salary"})
int[] update(List<Employee> employees);
include

Only properties specified in the include property of the @BatchUpdate annotation are included in the update target. If the same property is specified in both the include property and the exclude property of @BatchUpdate, it is excluded from the update target. Even if a property is specified in this element, it is excluded from the update target if the updatable property of its @Column annotation is false.

@BatchUpdate(include = {"name", "salary"})
int[] update(List<Employee> employees);

Batch update by SQL file

To execute a batch update with an SQL file, set the sqlFile property of the @BatchUpdate annotation to true and prepare an SQL file that corresponds to the method.

Note

In batch updates by SQL file, the rules differ depending on whether you use the Population directive.

Case of using the comment that generates the update column list
@BatchUpdate(sqlFile = true)
int[] update(List<Employee> employees);

@BatchUpdate(sqlFile = true)
BatchResult<ImmutableEmployee> update(List<ImmutableEmployee> employees);

The parameter type must be a subtype of java.lang.Iterable whose element type is an entity class. Only one parameter can be specified. The parameter must not be null. The number of elements in the returned array equals the number of elements in the Iterable. The update count of each element is returned in the array.

For example, you describe SQL like the one below to correspond to the above methods.

update employee set /*%populate*/ id = id where name = /* employees.name */'hoge'

In the SQL file, the parameter name refers to an element of the Iterable subtype.

The rules for controlling the update target properties are the same as in Batch update by auto generated SQL.

Case of not using the comment that generates the update column list
@BatchUpdate(sqlFile = true)
int[] update(List<Employee> employees);

@BatchUpdate(sqlFile = true)
BatchResult<ImmutableEmployee> update(List<ImmutableEmployee> employees);

The parameter type must be a subtype of java.lang.Iterable whose element type is an arbitrary type. Only one parameter can be specified. The parameter must not be null. The number of elements in the returned array equals the number of elements in the Iterable. The update count of each element is returned in the array.

For example, you describe SQL like the one below to correspond to the above methods.

update employee set name = /* employees.name */'hoge', salary = /* employees.salary */100
where id = /* employees.id */0

In the SQL file, the parameter name refers to an element of the Iterable subtype.

Automatic version number updating is not executed in a batch update by SQL file. Also, the exclude and include properties of the @BatchUpdate annotation are not referenced.

Version number and optimistic concurrency control in SQL file

Optimistic concurrency control is executed if the following conditions are satisfied.

  • The element type of the java.lang.Iterable subtype parameter is an entity class that has a property annotated with @Version.
  • The ignoreVersion property of the @BatchUpdate annotation is false.

However, writing the SQL for optimistic concurrency control in the SQL file is the application developer's responsibility. For example, as in the SQL below, you must check the version number in the WHERE clause and increment the version number by 1 in the SET clause.

update EMPLOYEE set DELETE_FLAG = 1, VERSION = /* employees.version */1 + 1
where ID = /* employees.id */1 and VERSION = /* employees.version */1

BatchOptimisticLockException, representing an optimistic concurrency control failure, is thrown if the update count of this SQL is 0. If the update count is not 0, BatchOptimisticLockException is not thrown and the version property of the entity is incremented by 1.

ignoreVersion

If the ignoreVersion property of the @BatchUpdate annotation is true, BatchOptimisticLockException is not thrown, even if the update count is 0 or more than 1. Also, the version property of the entity is not modified.

@BatchUpdate(sqlFile = true, ignoreVersion = true)
int[] update(List<Employee> employees);
suppressOptimisticLockException

If the suppressOptimisticLockException property of @BatchUpdate is true, BatchOptimisticLockException is not thrown even if the update count is 0. However, the version property value of the entity is incremented by 1.

@BatchUpdate(sqlFile = true, suppressOptimisticLockException = true)
int[] update(List<Employee> employees);

Unique constraint violation

UniqueConstraintException is thrown if a unique constraint violation occurs, regardless of whether an SQL file is used.

Query timeout

You can specify the query timeout in seconds in the queryTimeout property of the @BatchUpdate annotation.

@BatchUpdate(queryTimeout = 10)
int[] update(List<Employee> employees);

This setting is applied regardless of whether an SQL file is used. If no value is set in the queryTimeout property, the query timeout specified in the config class is used.

Batch size

You can specify the batch size in the batchSize property of the @BatchUpdate annotation.

@BatchUpdate(batchSize = 10)
int[] update(List<Employee> employees);

This setting is applied regardless of whether an SQL file is used. If you do not specify a value in the batchSize property, the batch size specified in the Configuration class is applied.

SQL log output format

You can specify the SQL log output format in the sqlLog property of the @BatchUpdate annotation.

@BatchUpdate(sqlLog = SqlLogType.RAW)
int[] update(List<Employee> employees);

SqlLogType.RAW means the SQL with bind parameters is output to the log.

Batch delete

Annotate a DAO method with @BatchDelete to execute a batch delete.

@Dao(config = AppConfig.class)
public interface EmployeeDao {
    @BatchDelete
    int[] delete(List<Employee> employees);
    ...
}

By default the DELETE statement is auto generated. You can map an arbitrary SQL file to the method by setting the sqlFile property of the @BatchDelete annotation to true.

If an entity listener is specified for the entity class parameter, its preDelete method is called for each entity before the delete is executed. Likewise, its postDelete method is called for each entity after the delete is executed.

Return value

The return value must be int[], where each element represents the update count of the corresponding delete.

Batch delete by auto generated SQL

The parameter type must be a subtype of java.lang.Iterable whose element type is an entity class. Only one parameter can be specified. The parameter must not be null. The number of elements in the returned array equals the number of elements in the Iterable. The delete count of each element is returned in the array.

Version number and optimistic concurrency control in auto generated SQL

Optimistic concurrency control is executed if the following conditions are satisfied.

  • The element entity class of the java.lang.Iterable subtype parameter has a property annotated with @Version.
  • The ignoreVersion property of the @BatchDelete annotation is false.

If optimistic concurrency control is enabled, the version number is included along with the identifier in the delete condition. BatchOptimisticLockException, representing an optimistic concurrency control failure, is thrown if the delete count is 0 in that case.

ignoreVersion

If the ignoreVersion property of the @BatchDelete annotation is true, the version number is not included in the delete condition. BatchOptimisticLockException is not thrown, even if the delete count is 0.

@BatchDelete(ignoreVersion = true)
int[] delete(List<Employee> employees);
suppressOptimisticLockException

If the suppressOptimisticLockException property of @BatchDelete is true, the version number is included in the delete condition, but BatchOptimisticLockException is not thrown even if the delete count is 0.

@BatchDelete(suppressOptimisticLockException = true)
int[] delete(List<Employee> employees);

Batch delete by SQL file

To execute a batch delete with an SQL file, set the sqlFile property of the @BatchDelete annotation to true and prepare an SQL file that corresponds to the method.

@BatchDelete(sqlFile = true)
int[] delete(List<Employee> employees);

The parameter type must be a subtype of java.lang.Iterable whose element type is an arbitrary type. Only one parameter can be specified. The parameter must not be null. The number of elements in the returned array equals the number of elements in the Iterable. The delete count of each element is returned in the array.

For example, you describe SQL like the one below to correspond to the above method.

delete from employee where name = /* employees.name */'hoge'

In the SQL file, the parameter name refers to an element of the java.lang.Iterable subtype.

Version number and optimistic concurrency control in SQL file

Optimistic concurrency control is executed if the following conditions are satisfied.

  • The element type of the java.lang.Iterable subtype parameter is an entity class that has a property annotated with @Version.
  • The ignoreVersion property of the @BatchDelete annotation is false.

However, writing the SQL for optimistic concurrency control in the SQL file is the application developer's responsibility. For example, as in the SQL below, you must check the version number in the WHERE clause.

delete from EMPLOYEE where ID = /* employees.id */1 and VERSION = /* employees.version */1

BatchOptimisticLockException, representing an optimistic concurrency control failure, is thrown if the delete count of this SQL is 0 or more than 1.

ignoreVersion

If the ignoreVersion property of the @BatchDelete annotation is true, BatchOptimisticLockException is not thrown even if the delete count is 0 or more than 1.

@BatchDelete(sqlFile = true, ignoreVersion = true)
int[] delete(List<Employee> employees);
suppressOptimisticLockException

If the suppressOptimisticLockException property of @BatchDelete is true, BatchOptimisticLockException is not thrown even if the delete count is 0 or more than 1.

@BatchDelete(sqlFile = true, suppressOptimisticLockException = true)
int[] delete(List<Employee> employees);

Query timeout

You can specify the query timeout in seconds in the queryTimeout property of the @BatchDelete annotation.

@BatchDelete(queryTimeout = 10)
int[] delete(List<Employee> employees);

This setting is applied regardless of whether an SQL file is used. If no value is set in the queryTimeout property, the query timeout specified in the config class is used.

Batch size

You can specify the batch size in the batchSize property of the @BatchDelete annotation.

@BatchDelete(batchSize = 10)
int[] delete(List<Employee> employees);

This setting is applied regardless of whether an SQL file is used. If you do not specify a value in the batchSize property, the batch size specified in the Configuration class is applied.

SQL log output format

You can specify the SQL log output format in the sqlLog property of the @BatchDelete annotation.

@BatchDelete(sqlLog = SqlLogType.RAW)
int[] delete(List<Employee> employees);

SqlLogType.RAW means the SQL with bind parameters is output to the log.

Stored function

To call stored functions, you must annotate DAO methods with the @Function annotation:

@Dao(config = AppConfig.class)
public interface EmployeeDao {
    @Function
    Integer execute(@In Integer id, @InOut Reference<BigDecimal> salary);
    ...
}

Return type

If the stored function returns nothing, the return type must be void.

If the stored function returns a single result, the return type must be one of the following:

If the stored function returns a result list, the return type must be java.util.List and its element type must be one of the following:

Function name

The annotated method name is recognized as the function name by default. To override it, you can specify the value to the @Function’s name element:

@Function(name = "calculateSalary")
void execute(@In Integer id, @InOut Reference<BigDecimal> salary);

Parameters

The order of stored function’s parameters must correspond with the order of DAO method parameters.

All parameters must be annotated with one of following annotations:

  • @In
  • @InOut
  • @Out
  • @ResultSet
IN parameter

To indicate IN parameters, annotate corresponding DAO method parameters with the @In annotation. The type of the DAO method parameter must be one of the following:

Suppose you have the following definition:

@Function
void execute(@In Integer id);

You can invoke the method as follows:

EmployeeDao dao = new EmployeeDaoImpl();
dao.execute(1);
INOUT parameter

To indicate INOUT parameters, annotate corresponding DAO method parameters with the @InOut annotation. The type of the DAO method parameter must be org.seasar.doma.jdbc.Reference and its type parameter must be one of the following:

Suppose you have the following definition:

@Function
void execute(@InOut Reference<BigDecimal> salary);

You can invoke the method as follows:

EmployeeDao dao = new EmployeeDaoImpl();
BigDecimal in = new BigDecimal(100);
Reference<BigDecimal> ref = new Reference<BigDecimal>(in);
dao.execute(ref);
BigDecimal out = ref.get();
OUT parameter

To indicate OUT parameters, annotate corresponding DAO method parameters with the @Out annotation. The type of the DAO method parameter must be org.seasar.doma.jdbc.Reference and its type parameter must be one of the following:

Suppose you have the following definition:

@Function
Integer execute(@Out Reference<BigDecimal> salary);

You can invoke the method as follows:

EmployeeDao dao = new EmployeeDaoImpl();
Reference<BigDecimal> ref = new Reference<BigDecimal>();
Integer result = dao.execute(ref);
BigDecimal out = ref.get();
Cursor or result set

To indicate cursors or result sets, annotate corresponding DAO method parameters with the @ResultSet annotation.

The DAO method parameter type must be java.util.List and its element type must be one of the following:

Suppose you have the following definition:

@Function
void execute(@ResultSet List<Employee> employee);

You can invoke the method as follows:

EmployeeDao dao = new EmployeeDaoImpl();
List<Employee> employees = new ArrayList<Employee>();
dao.execute(employees);
for (Employee e : employees) {
    ...
}

Stored procedure

To call stored procedures, you must annotate DAO methods with the @Procedure annotation:

@Dao(config = AppConfig.class)
public interface EmployeeDao {
    @Procedure
    void execute(@In Integer id, @InOut Reference<BigDecimal> salary);
    ...
}

Return Type

The return type must be void.

Procedure name

The annotated method name is recognized as the procedure name by default. To override it, you can specify the value to the @Procedure’s name element:

@Procedure(name = "calculateSalary")
void execute(@In Integer id, @InOut Reference<BigDecimal> salary);

Parameters

The order of stored procedure’s parameters must correspond with the order of DAO method parameters.

All parameters must be annotated with one of following annotations:

  • @In
  • @InOut
  • @Out
  • @ResultSet
IN parameter

To indicate IN parameters, annotate corresponding DAO method parameters with the @In annotation. The type of the DAO method parameter must be one of the following:

Suppose you have the following definition:

@Procedure
void execute(@In Integer id);

You can invoke the method as follows:

EmployeeDao dao = new EmployeeDaoImpl();
dao.execute(1);
INOUT parameter

To indicate INOUT parameters, annotate corresponding DAO method parameters with the @InOut annotation. The type of the DAO method parameter must be org.seasar.doma.jdbc.Reference and its type parameter must be one of the following:

Suppose you have the following definition:

@Procedure
void execute(@InOut Reference<BigDecimal> salary);

You can invoke the method as follows:

EmployeeDao dao = new EmployeeDaoImpl();
BigDecimal in = new BigDecimal(100);
Reference<BigDecimal> ref = new Reference<BigDecimal>(in);
dao.execute(ref);
BigDecimal out = ref.get();
OUT parameter

To indicate OUT parameters, annotate corresponding DAO method parameters with the @Out annotation. The type of the DAO method parameter must be org.seasar.doma.jdbc.Reference and its type parameter must be one of the following:

Suppose you have the following definition:

@Procedure
void execute(@Out Reference<BigDecimal> salary);

You can invoke the method as follows:

EmployeeDao dao = new EmployeeDaoImpl();
Reference<BigDecimal> ref = new Reference<BigDecimal>();
dao.execute(ref);
BigDecimal out = ref.get();
Cursor or result set

To indicate cursors or result sets, annotate corresponding DAO method parameters with the @ResultSet annotation.

The DAO method parameter type must be java.util.List and its element type must be one of the following:

Suppose you have the following definition:

@Procedure
void execute(@ResultSet List<Employee> employees);

You can invoke the method as follows:

EmployeeDao dao = new EmployeeDaoImpl();
List<Employee> employees = new ArrayList<Employee>();
dao.execute(employees);
for (Employee e : employees) {
    ...
}

Factories

To get instances from the factory methods of java.sql.Connection, annotate DAO methods with one of following annotations;

  • @ArrayFactory
  • @BlobFactory
  • @ClobFactory
  • @NClobFactory
  • @SQLXMLFactory

Creating Array instances

A return type must be java.sql.Array and the number of parameters must be one. The parameter type must be an array type and the parameter must not be null.

Specify a database type name to the @ArrayFactory’s typeName element:

@ArrayFactory(typeName = "integer")
Array createIntegerArray(Integer[] elements);
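
A usage sketch (the DAO implementation class name follows the earlier examples):

EmployeeDao dao = new EmployeeDaoImpl();
Array array = dao.createIntegerArray(new Integer[] {10, 20, 30});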

Creating Blob instances

A return type must be java.sql.Blob and the number of parameters must be zero:

@BlobFactory
Blob createBlob();

Creating Clob instances

A return type must be java.sql.Clob and the number of parameters must be zero:

@ClobFactory
Clob createClob();

Creating NClob instances

A return type must be java.sql.NClob and the number of parameters must be zero:

@NClobFactory
NClob createNClob();

Creating SQLXML instances

A return type must be java.sql.SQLXML and the number of parameters must be zero:

@SQLXMLFactory
SQLXML createSQLXML();

Script

To run a series of static SQL statements, annotate DAO methods with @Script:

@Dao(config = AppConfig.class)
public interface EmployeeDao {
    @Script
    void createTable();
    ...
}

The return type of the method must be void and the number of parameters must be zero.

Script representation

Scripts in files
Encoding

The script files must be saved as UTF-8 encoded.

Location

The script files must be located in directories below a “META-INF” directory which is included in CLASSPATH.

Format of file path

The script file path must follow the following format:

META-INF/path-format-of-dao-interface/dao-method.script

For example, when the DAO interface name is aaa.bbb.EmployeeDao and the DAO method name is createTable, the script file path is as follows:

META-INF/aaa/bbb/EmployeeDao/createTable.script
Dependency on a specific RDBMS

You can specify a dependency on a specific RDBMS by file name. To do this, put the hyphen “-” and RDBMS name before the extension “.script”. For example, the file path specific to PostgreSQL is as follows:

META-INF/aaa/bbb/EmployeeDao/createTable-postgres.script

The script files specific to RDBMSs are given priority. For example, in the environment where PostgreSQL is used, “META-INF/aaa/bbb/EmployeeDao/createTable-postgres.script” is chosen instead of “META-INF/aaa/bbb/EmployeeDao/createTable.script”.

See also Dependency on a specific RDBMS.

Scripts in annotation

You can specify scripts to DAO methods with the @Sql annotation:

@Dao(config = AppConfig.class)
public interface EmployeeDao {
    @Sql("create table employee (id integer, name varchar(200))")
    @Script
    void createTable();
    ...
}

See also SQL templates in annotations.

Delimiter

There are two kinds of delimiters in scripts:

  • statement delimiter
  • block delimiter

The statement delimiter is always a semicolon ;. The block delimiter is determined by a Dialect instance. The RDBMS block delimiters are as follows:

RDBMS Dialect block delimiter
DB2 Db2Dialect @
H2 Database Engine 1.2.126 H212126Dialect  
H2 Database H2Dialect  
HSQLDB HsqldbDialect  
Microsoft SQL Server 2008 Mssql2008Dialect GO
Microsoft SQL Server MssqlDialect GO
MySQL MySqlDialect /
Oracle Database OracleDialect /
PostgreSQL PostgresDialect $$
SQLite SqliteDialect  

You can also specify the block delimiter to @Script’s blockDelimiter element:

@Script(blockDelimiter = "GO")
void createProcedure();

The corresponding script file is as follows:

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO

CREATE PROCEDURE [dbo].[MY_PROCEDURE]
AS
BEGIN
    SET NOCOUNT ON;
END
GO

Stopping on error

Script running will stop when any statement execution fails. To continue the script running, specify false to the haltOnError element:

@Script(haltOnError = false)
void createTable();

Example

Following script is valid for Oracle Database:

/*
 * table creation statement
 */
create table EMPLOYEE (
  ID numeric(5) primary key,  -- identifier is not generated automatically
  NAME varchar2(20)           -- first name only
);

/*
 * insert statement
 */
insert into EMPLOYEE (ID, NAME) values (1, 'SMITH');

/*
 * procedure creation block
 */
create or replace procedure proc
( cur out sys_refcursor,
  employeeid in numeric
) as
begin
  open cur for select * from employee where id > employeeid order by id;
end proc;
/

/*
 * procedure creation block
 */
create or replace procedure proc2
( cur out sys_refcursor,
  employeeid in numeric
) as
begin
  open cur for select * from employee where id > employeeid order by id;
end proc2;
/

You can use both a single line comment -- and a multi-line comment /* ... */. Each statement must end with a semicolon ;. Be careful that a new line doesn’t mean the end of a statement.

In this example, the slash / is a block delimiter. The block delimiter must appear at the beginning of a line and be followed by a new line.

SQL processor

SQL processors can handle the SQL statements generated from corresponding SQL templates. To mark a DAO method as an SQL processor, annotate the method with @SqlProcessor:

@Dao(config = AppConfig.class)
public interface EmployeeDao {
    @SqlProcessor
    <R> R process(Integer id, BiFunction<Config, PreparedSql, R> handler);
    ...
}

Warning

Being aware of SQL injection vulnerabilities is essential. If it’s possible, consider alternative ways other than SQL processors.

Return type

The return type must be the same type as the third type parameter of BiFunction:

@SqlProcessor
String process(Integer id, BiFunction<Config, PreparedSql, String> handler);

If the return type is void, the third type parameter of BiFunction must be Void:

@SqlProcessor
void process(Integer id, BiFunction<Config, PreparedSql, Void> handler);

Parameter

Include a parameter whose type is BiFunction. The BiFunction parameter accepts a configuration and an SQL statement, and then processes them. Parameters other than the BiFunction parameter are used in the SQL template.

Example

Suppose you want to change the SQL statement generated from an SQL template and execute it:

EmployeeDao dao = ...
dao.process(1, (config, preparedSql) -> {
  String sql = preparedSql.getRawSql();
  String anotherSql = createAnotherSql(sql);
  DataSource dataSource = config.getDataSource();
  try (Connection connection = dataSource.getConnection();
       PreparedStatement statement = connection.prepareStatement(anotherSql)) {
    return statement.execute();
  } catch (SQLException e) {
    // getConnection and prepareStatement throw the checked SQLException,
    // which cannot escape the BiFunction, so wrap it
    throw new RuntimeException(e);
  }
});
The SQL template corresponding to the method is as follows:

select * from employee where id = /*^ id */0

Query builders

The package org.seasar.doma.jdbc.builder provides SQL builders.

When it is difficult to build an SQL statement with Queries, consider using the SQL builders in a Default method.

Insert

InsertBuilder builder = InsertBuilder.newInstance(config);
builder.sql("insert into Emp");
builder.sql("(name, salary)");
builder.sql("values (");
builder.param(String.class, "SMITH").sql(", ");
builder.param(BigDecimal.class, new BigDecimal(1000)).sql(")");
builder.execute();

Update

UpdateBuilder builder = UpdateBuilder.newInstance(config);
builder.sql("update Emp");
builder.sql("set");
builder.sql("name = ").param(String.class, "SMIHT").sql(",");
builder.sql("salary = ").param(BigDecimal.class, new BigDecimal("1000"));
builder.sql("where");
builder.sql("id = ").param(int.class, 10);
builder.execute();

Delete

DeleteBuilder builder = DeleteBuilder.newInstance(config);
builder.sql("delete from Emp");
builder.sql("where");
builder.sql("name = ").param(String.class, "SMITH");
builder.sql("and");
builder.sql("salary = ").param(BigDecimal.class, new BigDecimal(1000));
builder.execute();

SQL templates

Overview

Doma supports SQL templates, called “two-way SQL”. “Two-way SQL” means that the SQL templates can be used in two ways:

  • To build dynamic SQL statements from the templates.
  • To execute the templates in SQL tools as they are.

Every SQL template must correspond to a DAO method. For example, suppose you have the pair of an SQL template and a DAO method as follows:

select * from employee where employee_id = /* employeeId */99
Employee selectById(Integer employeeId);

The employeeId expression enclosed between /* and */ corresponds to the method parameter “employeeId” of the DAO. At runtime, the SQL comment and the following number /* employeeId */99 are replaced with a bind variable ?, and the method parameter “employeeId” is passed to that variable. The SQL statement generated from the SQL template is as follows:

select * from employee where employee_id = ?

The number 99 in the SQL template is test data and is never used at runtime. The test data is only useful when you execute the SQL template as is. In other words, you can check whether the SQL template is grammatically correct with your favorite SQL tools.

Each SQL template is represented by either a text file or an annotation.

SQL templates in files

You can specify SQL templates in text files:

@Dao
public interface EmployeeDao {
  @Select
  Employee selectById(Integer employeeId);

  @Delete(sqlFile = true)
  int deleteByName(Employee employee);
}

Above selectById and deleteByName methods are mapped onto their own SQL files. DAO methods must be annotated with one of following annotations:

  • @Select
  • @Insert(sqlFile = true)
  • @Update(sqlFile = true)
  • @Delete(sqlFile = true)
  • @BatchInsert(sqlFile = true)
  • @BatchUpdate(sqlFile = true)
  • @BatchDelete(sqlFile = true)

Encoding

The SQL files must be saved as UTF-8 encoded.

Location

The SQL files must be located in directories below a “META-INF” directory which is included in CLASSPATH.

Format of file path

The SQL file path must follow the following format:

META-INF/path-format-of-dao-interface/dao-method.sql

For example, when the DAO interface name is aaa.bbb.EmployeeDao and the DAO method name is selectById, the SQL file path is as follows:

META-INF/aaa/bbb/EmployeeDao/selectById.sql
Dependency on a specific RDBMS

You can specify dependency on a specific RDBMS by file name. To do this, put the hyphen “-” and RDBMS name before the extension “.sql”. For example, the file path specific to PostgreSQL is as follows:

META-INF/aaa/bbb/EmployeeDao/selectById-postgres.sql

The SQL files specific to RDBMSs are given priority. For example, in the environment where PostgreSQL is used, “META-INF/aaa/bbb/EmployeeDao/selectById-postgres.sql” is chosen instead of “META-INF/aaa/bbb/EmployeeDao/selectById.sql”.

The RDBMS names stem from the dialects:

RDBMS Dialect RDBMS Name
DB2 Db2Dialect db2
H2 Database H2Dialect h2
HSQLDB HsqldbDialect hsqldb
Microsoft SQL Server MssqlDialect mssql
MySQL MySqlDialect mysql
Oracle Database OracleDialect oracle
PostgreSQL PostgresDialect postgres
SQLite SqliteDialect sqlite

SQL templates in annotations

You can specify SQL templates to DAO methods with the @Sql annotation:

@Dao
public interface EmployeeDao {
  @Sql("select * from employee where employee_id = /* employeeId */99")
  @Select
  Employee selectById(Integer employeeId);

  @Sql("delete from employee where employee_name = /* employee.employeeName */'aaa'")
  @Delete
  int deleteByName(Employee employee);
}

The @Sql annotation must be combined with following annotations:

  • @Select
  • @Script
  • @Insert
  • @Update
  • @Delete
  • @BatchInsert
  • @BatchUpdate
  • @BatchDelete

Warning

The @Sql annotation is an experimental feature. The fully qualified name of @Sql is org.seasar.doma.experimental.Sql.

Directives

In SQL templates, SQL comments that follow specific rules are recognised as directives. The supported directives are as follows:

Note

See also Expression language for information of the expression language available in directives.

Bind variable directive

Bind variable directive is represented with the format /*...*/. The expression enclosed between /* and */ is evaluated and its evaluation result is passed to a bind variable in the SQL statement. The directive must be followed by test data, which is never used at runtime.

Basic and domain parameters

The parameter whose type is one of Basic classes and Domain classes is recognised as a bind variable.

The following example is the pair of a DAO method and an SQL template:

Employee selectById(Integer employeeId);
select * from employee where employee_id = /* employeeId */99

The following SQL statement is generated from the SQL template:

select * from employee where employee_id = ?
Parameters in IN clause

The parameter whose type is a subtype of java.lang.Iterable or an array type is recognised as bind variables in IN clause. The type argument of java.lang.Iterable must be one of Basic classes and Domain classes. The directives must be followed by test data enclosed between ( and ).

The following example is the pair of a DAO method and an SQL template:

List<Employee> selectByIdList(List<Integer> employeeIdList);
select * from employee where employee_id in /* employeeIdList */(1,2,3)

If the employeeIdList contains five elements, the following SQL statement is generated from the SQL template:

select * from employee where employee_id in (?, ?, ?, ?, ?)

If the employeeIdList is empty, the IN clause is replaced with in (null) at runtime:

select * from employee where employee_id in (null)

Literal variable directive

Literal variable directive is represented with the format /*^...*/. The expression enclosed between /*^ and */ is evaluated and its evaluation result is converted to literal format and embedded in the SQL statement. The directive must be followed by test data, which is never used at runtime.

The following example is the pair of a DAO method and an SQL template:

Employee selectByCode(String code);
select * from employee where code = /*^ code */'test'

The DAO method is invoked as follows:

EmployeeDao dao = new EmployeeDaoImpl();
List<Employee> list = dao.selectByCode("abc");

The generated SQL statement is as follows:

select * from employee where code = 'abc'

Note

Literal variable directives are helpful to avoid bind variables and fix SQL plans.

Warning

Literal variable directives do not escape parameters for SQL injection. But the directives reject parameters containing the single quotation '.

Embedded variable directive

Embedded variable directive is represented with the format /*#...*/. The expression enclosed between /*# and */ is evaluated and its evaluation result is embedded in SQL statement.

The following example is the pair of a DAO method and an SQL template:

List<Employee> selectAll(BigDecimal salary, String orderBy);
select * from employee where salary > /* salary */100 /*# orderBy */

The DAO method is invoked as follows:

EmployeeDao dao = new EmployeeDaoImpl();
BigDecimal salary = new BigDecimal(1000);
String orderBy = "order by salary asc, employee_name";
List<Employee> list = dao.selectAll(salary, orderBy);

The generated SQL statement is as follows:

select * from employee where salary > ? order by salary asc, employee_name

Note

Embedded variable directives are helpful to build SQL fragments such as ORDER BY clause.

Warning

To prevent SQL injection vulnerabilities, embedded variable directives reject parameters containing the following values:

  • a single quotation '
  • a semicolon ;
  • two hyphens --
  • a slash and an asterisk /*

Condition directive

Condition directive allows you to build SQL statements conditionally.

Synopsis
/*%if condition*/
  ...
/*%elseif condition2*/
  ...
/*%elseif condition3*/
  ...
/*%else*/
  ...
/*%end*/

The expressions condition, condition2, and condition3 must be evaluated to either boolean or java.lang.Boolean.

The elseif directives and the else directive are optional.

if

Suppose you have the following SQL template:

select * from employee where
/*%if employeeId != null */
    employee_id = /* employeeId */99
/*%end*/

If the employeeId is not null, the generated SQL statement is as follows:

select * from employee where employee_id = ?

If the employeeId is null, the generated SQL statement is as follows:

select * from employee

The SQL keyword where is removed automatically.

elseif and else

Suppose you have the following SQL template:

select
  *
from
  employee
where
/*%if employeeId != null */
  employee_id = /* employeeId */9999
/*%elseif departmentId != null */
  and
  department_id = /* departmentId */99
/*%else*/
  and
  department_id is null
/*%end*/

If the employeeId != null is evaluated true, the generated SQL statement is as follows:

select
  *
from
  employee
where
  employee_id = ?

If the employeeId == null && departmentId != null is evaluated true, the generated SQL statement is as follows:

select
  *
from
  employee
where
  department_id = ?

The SQL keyword and preceding department_id is removed automatically.

If the employeeId == null && departmentId == null is evaluated true, the generated SQL statement is as follows:

select
  *
from
  employee
where
  department_id is null

The SQL keyword and preceding department_id is removed automatically.

Nested condition directive

You can nest condition directives as follows:

select * from employee where
/*%if employeeId != null */
  employee_id = /* employeeId */99
  /*%if employeeName != null */
    and
    employee_name = /* employeeName */'hoge'
  /*%else*/
    and
    employee_name is null
  /*%end*/
/*%end*/
Removal of clauses on the condition directive

Following clauses can become unnecessary on the condition directive:

  • WHERE
  • HAVING
  • ORDER BY
  • GROUP BY

In that case, they are removed automatically.

Suppose you have the following SQL template:

select * from employee where
/*%if employeeId != null */
    employee_id = /* employeeId */99
/*%end*/

If the employeeId != null is evaluated false, the generated SQL statement is as follows:

select * from employee

Because the SQL clause where followed by /*%if ...*/ is unnecessary, it is removed automatically.

Removal of AND and OR keywords on the condition directives

AND and OR keywords can become unnecessary on the condition directive. In that case, they are removed automatically.

Suppose you have the following SQL template:

select * from employee where
/*%if employeeId != null */
    employee_id = /* employeeId */99
/*%end*/
and employeeName like 's%'

If the employeeId != null is evaluated false, the generated SQL statement is as follows:

select * from employee where employeeName like 's%'

Because the SQL keyword and following /*%end*/ is unnecessary, it is removed automatically.

Restriction on condition directive

/*%if condition*/ and /*%end*/ must be included in the same SQL clause and at the same statement level.

The following template is invalid, because /*%if condition*/ is in the FROM clause and /*%end*/ is in the WHERE clause:

select * from employee /*%if employeeId != null */
where employee_id = /* employeeId */99 /*%end*/

The following template is invalid, because /*%if condition*/ is in the outer statement and /*%end*/ is in the inner statement:

select * from employee
where employee_id in /*%if departmentId != null */(select ...  /*%end*/ ...)

Loop directive

Loop directive allows you to build SQL statements using loop.

Synopsis
/*%for item : sequence*/
  ...
/*%end*/

The item is the loop variable. The expression sequence must be evaluated to a subtype of java.lang.Iterable or an array type.

Inside the block between /*%for item : sequence*/ and /*%end*/, two extra loop variables are available:

item_index: The index (0-based) of the current item in the loop
item_has_next: Boolean value that indicates whether the next item exists

The prefix item indicates the name of the loop variable.

for and item_has_next

Suppose you have the following SQL template:

select * from employee where
/*%for name : names */
employee_name like /* name */'hoge'
  /*%if name_has_next */
/*# "or" */
  /*%end */
/*%end*/

If the sequence names contains three items, the generated SQL statement is as follows:

select * from employee where
employee_name like ?
or
employee_name like ?
or
employee_name like ?
Removal of clauses on the loop directive

Following clauses can become unnecessary on the loop directive:

  • WHERE
  • HAVING
  • ORDER BY
  • GROUP BY

In that case, they are removed automatically.

Suppose you have the following SQL template:

select * from employee where
/*%for name : names */
employee_name like /* name */'hoge'
  /*%if name_has_next */
/*# "or" */
  /*%end */
/*%end*/

If the sequence names is empty, the generated SQL statement is as follows:

select * from employee

Because the SQL clause where followed by /*%for ...*/ is unnecessary, it is removed automatically.

Removal of AND and OR keywords on the loop directive

AND and OR keywords can become unnecessary on the loop directive. In that case, they are removed automatically.

Suppose you have the following SQL template:

select * from employee where
/*%for name : names */
employee_name like /* name */'hoge'
  /*%if name_has_next */
/*# "or" */
  /*%end */
/*%end*/
or
salary > 1000

If the sequence names is empty, the generated SQL statement is as follows:

select * from employee where salary > 1000

Because the SQL keyword or following /*%end*/ is unnecessary, it is removed automatically.

Restriction on loop directive

/*%for ...*/ and /*%end*/ must be included in the same SQL clause and at the same statement level.

See also Restriction on condition directive.

Expansion directive

Expansion directive allows you to build the column list of a SELECT clause from the definition of Entity classes.

Synopsis
/*%expand alias*/

The expression alias is optional. If it is specified, it must be evaluated to java.lang.String.

The directive must be followed by the asterisk *.

expand

Suppose you have the following SQL template and the entity class mapped to the template:

select /*%expand*/* from employee
@Entity
public class Employee {
    Integer id;
    String name;
    Integer age;
}

The generated SQL statement is as follows:

select id, name, age from employee

If you specify an alias for the table, specify the same alias in the expansion directive:

select /*%expand "e" */* from employee e

The generated SQL statement is as follows:

select e.id, e.name, e.age from employee e

Population directive

Population directive allows you to build the column list of an UPDATE SET clause from the definition of Entity classes.

Synopsis
/*%populate*/
populate

Suppose you have the following SQL template and the entity class mapped to the template:

update employee set /*%populate*/ id = id where age < 30
@Entity
public class Employee {
    Integer id;
    String name;
    Integer age;
}

The generated SQL statement is as follows:

update employee set id = ?, name = ?, age = ? where age < 30

Comments

This section shows you how to distinguish between directives and normal SQL comments.

Single line comment

A string consisting of two hyphens -- always starts a single line comment. It is never a directive.

Multi line comment

If the character following /* is not permissible as the first character of a Java identifier and it is neither %, #, @, " nor ', the /* is the beginning of a multi line comment.

The following are beginnings of a multi line comment:

  • /**…*/
  • /*+…*/
  • /*=…*/
  • /*:…*/
  • /*;…*/
  • /*(…*/
  • /*)…*/
  • /*&…*/

On the other hand, the following are beginnings of a directive:

  • /* …*/
  • /*a…*/
  • /*$…*/
  • /*@…*/
  • /*"…*/
  • /*'…*/
  • /*#…*/
  • /*%…*/

Note

We recommend you always use /**...*/ to begin multi line comments because it is simple.

Expression language

You can write simple expressions in directives of SQL templates. The grammar is almost the same as Java. However, not everything that is possible in Java is possible in the expressions.

Note

Especially, the big difference is how optional types such as java.util.Optional are handled. In an expression, a value of Optional type is always converted to a value of its element type automatically. For example, a value of the Optional<String> type is treated as a value of String type. Therefore, we cannot call methods of the Optional type, nor can we call methods that have an Optional type in their parameters.

When you want to check existence of a value, use /*%if optional != null */ instead of /*%if optional.isPresent() */.

The same is true for java.util.OptionalInt, java.util.OptionalDouble, and java.util.OptionalLong.
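
For example, a sketch assuming a DAO method that takes an optional value (the method and parameter names are illustrative); the corresponding SQL template tests for presence with a null comparison, shown here as comments:

@Select
List<Employee> selectByName(Optional<String> name);

// in the SQL template for this method:
//   select * from employee where
//   /*%if name != null */
//       employee_name = /* name */'smith'
//   /*%end*/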

Literals

You can use the following literals:

Literal    Type
null       void
true       boolean
false      boolean
10         int
10L        long
0.123F     float
0.123D     double
0.123B     java.math.BigDecimal
'a'        char
"a"        java.lang.String

The numeric types are distinguished by suffix letters such as L or F at the end of the literals. The suffixes must be capital letters.

select * from employee where
/*%if employeeName != null && employeeName.length() > 10 */
    employee_name = /* employeeName */'smith'
/*%end*/

Comparison operators

You can use the following comparison operators:

Operator Description
== Equal to operator
!= Not equal to operator
< Less than operator
<= Less than or equal to operator
> Greater than operator
>= Greater than or equal to operator

To use comparison operators, operands must implement java.lang.Comparable.

The operands for <, <=, > and >= must not be null.

select * from employee where
/*%if employeeName.indexOf("s") > -1 */
    employee_name = /* employeeName */'smith'
/*%end*/

Logical operators

You can use the following logical operators:

Operator Description
! Logical complement operator
&& Conditional-AND operator
|| Conditional-OR operator

With parentheses, you can override the precedence of operators.

select * from employee where
/*%if (departmentId == null || managerId == null) && employeeName != null */
    employee_name = /* employeeName */'smith'
/*%end*/

Arithmetic operators

You can use the following arithmetic operators:

Operator Description
+ Additive operator
- Subtraction operator
* Multiplication operator
/ Division operator
% Remainder operator

Operands must be of a numeric type.

select * from employee where
    salary = /* salary + 1000 */0

String concatenation operator

You can concatenate characters using a concatenation operator +.

The operand must be one of the following types:

  • java.lang.String
  • java.lang.Character
  • char
select * from employee where
   employee_name like /* employeeName + "_" */'smith'

Calling instance methods

You can call instance methods with the method names separated by dots (.). The method visibility must be public.

select * from employee where
/*%if employeeName.startsWith("s") */
    employee_name = /* employeeName */'smith'
/*%end*/

If the method has no argument, specify () after the method name.

select * from employee where
/*%if employeeName.length() > 10 */
    employee_name = /* employeeName */'smith'
/*%end*/

Accessing to instance fields

You can access instance fields with the field names separated by dots (.). Even if the visibility is private, you can access them.

select * from employee where
    employee_name = /* employee.employeeName */'smith'

Calling static methods

You can call static methods by prefixing the method name with the fully qualified class name enclosed in @. The method visibility must be public.

select * from employee where
/*%if @java.util.regex.Pattern@matches("^[a-z]*$", employeeName) */
    employee_name = /* employeeName */'smith'
/*%end*/

Accessing to static fields

You can access static fields by prefixing the field name with the fully qualified class name enclosed in @. Even if the visibility is private, you can access it.

select * from employee where
/*%if employeeName.length() < @java.lang.Byte@MAX_VALUE */
  employee_name = /* employeeName */'smith'
/*%end*/

Using built-in functions

Built-in functions are utilities mainly for changing values of binding variables before binding them to SQL.

For example, when you run a prefix search with a LIKE clause, you can write like this:

select * from employee where
    employee_name like /* @prefix(employee.employeeName) */'smith' escape '$'

@prefix(employee.employeeName) means that we pass employee.employeeName to the @prefix function. The @prefix function converts the character sequence which is received by the parameter to a string for forward match search. It also escapes special characters. For example, if the value of employee.employeeName is ABC, it’s converted to ABC%. If the value of employee.employeeName contains % such as AB%C, the % is escaped with a default escape sequence $, therefore the value is converted to AB$%C%.

You can use following function signatures:

String @escape(CharSequence text, char escapeChar = '$')
Escapes the character sequence for LIKE operation. The return value is a string which is a result of escaping the character sequence. If escapeChar isn’t specified, $ is used as a default escape sequence. It returns null if you pass null as a parameter.
String @prefix(CharSequence prefix, char escapeChar = '$')
Converts the character sequence for prefix search. The return value is a string which is a result of escaping the character sequence and adding a wild card character at the end. If escapeChar isn’t specified, $ is used as a default escape sequence. It returns null if you pass null as a parameter.
String @infix(CharSequence infix, char escapeChar = '$')
Converts the character sequence for infix search. The return value is a string which is a result of escaping the character sequence and adding wild card characters at the beginning and the end. If escapeChar isn’t specified, $ is used as a default escape sequence. It returns null if you pass null as a parameter.
String @suffix(CharSequence suffix, char escapeChar = '$')
Converts the character sequence for suffix search. The return value is a string which is a result of escaping the character sequence and adding a wild card character at the beginning. If escapeChar isn’t specified, $ is used as a default escape sequence. It returns null if you pass null as a parameter.
java.util.Date @roundDownTimePart(java.util.Date date)
Rounds down the time part. The return value is a new Date which is rounded down the time part. It returns null if you pass null as a parameter.
java.sql.Date @roundDownTimePart(java.sql.Date date)
Rounds down the time part. The return value is a new Date which is rounded down the time part. It returns null if you pass null as a parameter.
java.sql.Timestamp @roundDownTimePart(java.sql.Timestamp timestamp)
Rounds down the time part. The return value is a new Timestamp which is rounded down the time part. It returns null if you pass null as a parameter.
java.util.Date @roundUpTimePart(java.util.Date date)
Rounds up the time part. The return value is a new Date which is rounded up the time part. It returns null if you pass null as a parameter.
java.sql.Date @roundUpTimePart(java.sql.Date date)
Rounds up the time part. The return value is a new Date which is rounded up the time part. It returns null if you pass null as a parameter.
java.sql.Timestamp @roundUpTimePart(java.sql.Timestamp timestamp)
Rounds up the time part. The return value is a new Timestamp which is rounded up the time part. It returns null if you pass null as a parameter.
boolean @isEmpty(CharSequence charSequence)
Returns true if the character sequence is null or the length is 0.
boolean @isNotEmpty(CharSequence charSequence)
Returns true if the character sequence isn’t null and the length isn’t 0.
boolean @isBlank(CharSequence charSequence)
Returns true if the character sequence is null, the length is 0, or the sequence consists only of whitespace.
boolean @isNotBlank(CharSequence charSequence)
Returns true if the character sequence isn’t null, the length isn’t 0, and the sequence doesn’t consist only of whitespace.

These functions correspond to the methods of org.seasar.doma.expr.ExpressionFunctions.
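
For example, @isNotEmpty can be combined with the if directive so that the condition is omitted when the parameter is empty. This is a minimal sketch that reuses the employeeName parameter from the earlier examples:

select * from employee where
/*%if @isNotEmpty(employeeName) */
    employee_name like /* @infix(employeeName) */'smith' escape '$'
/*%end*/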

Using custom functions

You can define and use your own functions.

You need to follow these steps to use custom functions that you define yourself:

  • The function is defined as a method of a class which implements org.seasar.doma.expr.ExpressionFunctions.
  • The method is a public instance method.
  • The class is registered as an option in Annotation processing. The key of the option is doma.expr.functions.
  • The instance of the class you create is used in an RDBMS dialect in your configuration class (The implementations of RDBMS dialect provided by Doma can receive ExpressionFunctions in the constructor).

To call a custom function, add @ at the beginning of the function name like built-in functions. For example, you can call myfunc function like this:

select * from employee where
    employee_name = /* @myfunc(employee.employeeName) */'smith'
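
To register the class with the annotation processors, set the doma.expr.functions option described in Annotation processing. For example, in Gradle it might look like this (the class name com.example.MyExpressionFunctions is a hypothetical placeholder for your own implementation):

compileJava {
    options {
        // com.example.MyExpressionFunctions is a hypothetical implementation of ExpressionFunctions
        compilerArgs = ['-Adoma.expr.functions=com.example.MyExpressionFunctions']
    }
}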

Transaction

Doma supports local transactions. This document explains how to configure and use them.

If you want to use global transactions, use frameworks or application servers that support JTA (Java Transaction API).

Configuration

To use local transactions, the following conditions are required:

  • Return LocalTransactionDataSource from getDataSource in Config
  • Generate LocalTransactionManager using the LocalTransactionDataSource above in the constructor
  • Use the LocalTransactionManager above to control database access

There are several ways to create and obtain the LocalTransactionManager, but the simplest way is to create it in the constructor of the Config implementation class and make that class a singleton.

Here is an example:

@SingletonConfig
public class AppConfig implements Config {

    private static final AppConfig CONFIG = new AppConfig();

    private final Dialect dialect;

    private final LocalTransactionDataSource dataSource;

    private final TransactionManager transactionManager;

    private AppConfig() {
        dialect = new H2Dialect();
        dataSource = new LocalTransactionDataSource(
                "jdbc:h2:mem:tutorial;DB_CLOSE_DELAY=-1", "sa", null);
        transactionManager = new LocalTransactionManager(
                dataSource.getLocalTransaction(getJdbcLogger()));
    }

    @Override
    public Dialect getDialect() {
        return dialect;
    }

    @Override
    public DataSource getDataSource() {
        return dataSource;
    }

    @Override
    public TransactionManager getTransactionManager() {
        return transactionManager;
    }

    public static AppConfig singleton() {
        return CONFIG;
    }
}

Note

The @SingletonConfig annotation indicates that this class is a singleton.

Usage

Let’s look at some examples that use the following DAO interface, which is annotated with the AppConfig class described in the Configuration section.

@Dao(config = AppConfig.class)
public interface EmployeeDao {
    ...
}

The dao objects used in the code examples below are instances of this interface.
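
For example, assuming the default annotation processing options, the implementation class generated from EmployeeDao is named EmployeeDaoImpl and is placed in the same package, so a dao instance can be obtained like this:

// EmployeeDaoImpl is generated by Doma's annotation processor (default Impl suffix)
EmployeeDao dao = new EmployeeDaoImpl();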

Start and finish transactions

You can start a transaction with one of the following methods of TransactionManager:

  • required
  • requiresNew
  • notSupported

Use a lambda expression to write a process which you want to run in a transaction.

TransactionManager tm = AppConfig.singleton().getTransactionManager();

tm.required(() -> {
    Employee employee = dao.selectById(1);
    employee.setName("hoge");
    employee.setJobType(JobType.PRESIDENT);
    dao.update(employee);
});

The transaction is committed if the lambda expression finishes successfully. The transaction is rolled back if the lambda expression throws an exception.

Explicit rollback

Besides throwing an exception, you can use the setRollbackOnly method to roll back a transaction.

TransactionManager tm = AppConfig.singleton().getTransactionManager();

tm.required(() -> {
    Employee employee = dao.selectById(1);
    employee.setName("hoge");
    employee.setJobType(JobType.PRESIDENT);
    dao.update(employee);
    // Mark as rollback
    tm.setRollbackOnly();
});

Savepoint

With a savepoint, you can cancel specific changes in a transaction.

TransactionManager tm = AppConfig.singleton().getTransactionManager();

tm.required(() -> {
    // Search and update
    Employee employee = dao.selectById(1);
    employee.setName("hoge");
    dao.update(employee);

    // Create a savepoint
    tm.setSavepoint("beforeDelete");

    // Delete
    dao.delete(employee);

    // Rollback to the savepoint (cancel the deletion above)
    tm.rollback("beforeDelete");
});

Building an application

Before You Start

Maven Central Repository

You can pull the jar file of the Doma framework from the Maven central repository. The group id and artifact id are as follows:

GroupId: org.seasar.doma
ArtifactId: doma

See also: https://search.maven.org/artifact/org.seasar.doma/doma/

Build with Eclipse

Note

Instead of the manual settings shown below, we recommend generating Eclipse setting files automatically with the Gradle Eclipse Plugin. See also build.gradle and eclipse.gradle in the domaframework/simple-boilerplate repository.

Enabling annotation processing

  • Select “Project > Properties” from the menu bar and open the dialog
  • Select “Java Compiler > Annotation Processing” from the left menu of the dialog
  • Check as follows:
_images/annotation-processing.png

Setting factory path

  • Select “Project > Properties” from the menu bar and open the dialog
  • Select “Java Compiler > Annotation Processing > Factory Path” from the left menu of the dialog
  • Add the jar file of the Doma framework whose version is the same as the one in the Java Build Path
  • Check as follows:
_images/factory-path.png

Build with Gradle

Here is an example build.gradle:

apply plugin: 'java'

// Copy the resources referred by the Doma annotation processors to
// the destinationDir of the compileJava task
task copyDomaResources(type: Sync)  {
    from sourceSets.main.resources.srcDirs
    into compileJava.destinationDir
    include 'doma.compile.config'
    include 'META-INF/**/*.sql'
    include 'META-INF/**/*.script'
}

compileJava {
    // Depend on the above task
    dependsOn copyDomaResources
    options.encoding = 'UTF-8'
}

compileTestJava {
    options.encoding = 'UTF-8'
    // Disable the annotation processors during the test run
    options.compilerArgs = ['-proc:none']
}

dependencies {
    annotationProcessor "org.seasar.doma:doma:2.27.1"
    implementation "org.seasar.doma:doma:2.27.1"
}

repositories {
    mavenCentral()
    maven {url 'https://oss.sonatype.org/content/repositories/snapshots/'}
}

Note

The description maven {url 'https://oss.sonatype.org/content/repositories/snapshots/'} is required only when you need the SNAPSHOT version of the Doma framework.

Note

With the above build.gradle, you will benefit from Incremental annotation processing.

See also build.gradle in the domaframework/simple-boilerplate repository.

Annotation processing

Doma uses the Pluggable Annotation Processing API at compile time.

In this document, we describe the options for the annotation processors in Doma and show you how to pass them to build tools.

Options

doma.dao.package
The package that the generated implementation classes of interfaces annotated with @Dao belong to. The specified value overrides the value of doma.dao.subpackage. The default value is the same package as the one the interfaces annotated with @Dao belong to.
doma.dao.subpackage
The subpackage that the generated implementation classes of interfaces annotated with @Dao belong to. The specified value is overridden by the value of doma.dao.package. If this value is impl and the package of interfaces annotated with @Dao is example.dao, the generated implementation classes belong to the package example.dao.impl.
doma.dao.suffix
The name suffix that the generated implementation classes of interfaces annotated with @Dao have. If this value is Bean and the simple name of the interface annotated with @Dao is EmployeeDao, the simple name of the generated implementation class is EmployeeDaoBean. The default value is Impl.
doma.debug
Whether to output the debug log in annotation processing. If the value is true, the annotation processors output the debug log. The default value is false.
doma.domain.converters
The fully qualified names of the classes annotated with @DomainConverters, described as a comma-separated list. This value is used to find external domain classes.
doma.entity.field.prefix
The name prefix that the fields of the generated entity meta classes have. The value none means the prefix is not used. The default value is $.
doma.expr.functions
The fully qualified name of the class that implements org.seasar.doma.expr.ExpressionFunctions. The default value is org.seasar.doma.expr.ExpressionFunctions. This value is used to determine which functions are available in expression comments.
doma.resources.dir
The resource directory that contains resource files such as a doma.compile.config file and SQL files. Specify the value as an absolute path. If the value is not specified, the resource directory is the same as the directory where the classes are generated.
doma.sql.validation
Whether to validate the existence of sql files and the grammar of sql comments. If the value is true, the validations run. To disable the validations, set false. The default value is true.
doma.version.validation
Whether to validate the versions of doma.jar between runtime and compile-time. If the value is true, the validation runs. To disable the validation, set false. The default value is true.
doma.config.path
The file path of the configuration file for Doma. The default value is doma.compile.config.

Setting options in Eclipse

  • Select “Project > Properties” from the menu bar and open the dialog
  • Select “Java Compiler > Annotation Processing” from the left menu of the dialog
  • Add “Processor options”

Setting options in IntelliJ IDEA

  • Select “Preferences” from the menu bar and open the dialog
  • Select “Build, Execution, Deployment > Compiler > Annotation Processors” from the left menu of the dialog
  • Add “Annotation Processor options”

Setting options in Gradle

compileJava {
    options {
        compilerArgs = ['-Adoma.dao.subpackage=impl', '-Adoma.dao.suffix=Impl']
    }
}

Setting options with configuration file

The options specified in the doma.compile.config file are available in all build tools such as Eclipse, IDEA, Gradle and so on.

The doma.compile.config file must follow the properties file format and be placed in the root directory such as src/main/resources.
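
For example, a doma.compile.config file that uses some of the options above might look like this (the values are only illustrative):

# values below are illustrative
doma.dao.subpackage=impl
doma.dao.suffix=Impl
doma.debug=true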

Note

The options specified in the doma.compile.config file are overridden by the ones specific to the build tools.

Lombok support

Doma supports Lombok 1.16.12 or above.

Note

If you intend to use Eclipse, use version 4.5 or above.

Overview

Both Lombok and Doma provide annotation processors. Because the execution order of annotation processors is not determined in Java, Doma’s annotation processors are not always aware of the class modifications made by Lombok annotation processors.

To resolve the issue, Doma’s annotation processors recognize several of Lombok’s annotations and change their behavior accordingly. For example, if Doma’s annotation processors find a class annotated with @lombok.Value, they assume that the class has a constructor whose parameters cover all instance fields.

Best practices

We show you recommended ways to define classes with Lombok annotations.

Entity class definition

immutable entity classes
  • Specify true to the immutable element of @Entity
  • Specify either @lombok.Value or @lombok.AllArgsConstructor
  • Specify @lombok.Getter to generate getters, in case of @lombok.AllArgsConstructor
@Entity(immutable = true)
@Value
public class Employee {
  @Id
  Integer id;
  String name;
  Age age;
}
@Entity(immutable = true)
@AllArgsConstructor
@Getter
public class Worker {
  @Id
  private final Integer id;
  private final String name;
  private final Age age;
}
mutable entity classes
  • Define a default constructor
  • Specify @lombok.Data or @lombok.Getter/@lombok.Setter to generate getters/setters
@Entity
@Data
public class Person {
  @Id
  private Integer id;
  private String name;
  private Age age;
}
@Entity
@Getter
@Setter
public class Businessman {
  @Id
  private Integer id;
  private String name;
  private Age age;
}

Domain class definition

  • Specify @lombok.Value
  • Define only one instance field whose name is value
@Domain(valueType = Integer.class)
@Value
public class Age {
  Integer value;
}

Embeddable class definition

  • Specify either @lombok.Value or @lombok.AllArgsConstructor
  • Specify @lombok.Getter to generate getters, in case of @lombok.AllArgsConstructor
@Embeddable
@Value
public class Address {
  String street;
  String city;
}
@Embeddable
@AllArgsConstructor
@Getter
public class Location {
  private final String street;
  private final String city;
}

Kotlin support

Doma supports Kotlin 1.3.11 or above experimentally.

Best practices

We show you recommended ways to define classes and build them with Kotlin.

Entity classes

  • Define as a data class
  • Specify true to the immutable element of @Entity
  • Define only one constructor
  • Define properties only in the constructor
  • Use val for the property definitions
@Entity(immutable = true)
data class Person(
      @Id
      @GeneratedValue(strategy = GenerationType.IDENTITY)
      val id: Int? = null,
      val name: Name,
      val address: Address)

Domain classes

  • Define as a data class
  • Define only one constructor
  • Define only one property whose name is value in the constructor
  • Use val for the property definition
@Domain(valueType = String::class)
data class Name(val value: String)

Embeddable classes

  • Define as a data class
  • Define only one constructor
  • Define properties only in the constructor
  • Use val for the property definitions
@Embeddable
data class Address(val city: String, val street: String)

Dao interfaces

  • Specify a SQL template to @org.seasar.doma.experimental.Sql
  • Use org.seasar.doma.jdbc.Result as the return type of @Delete, @Insert and @Update
  • Use org.seasar.doma.jdbc.BatchResult as the return type of @BatchDelete, @BatchInsert and @BatchUpdate
@Dao(config = AppConfig::class)
interface PersonDao {
  @Sql("""
  select * from person where id = /*id*/0
  """)
  @Select
  fun selectById(id: Int): Person

  @Insert
  fun insert(person: Person): Result<Person>
}
val dao: PersonDao = ...
val person = Person(name = Name("Jhon"), address = Address(city = "Tokyo", street = "Yaesu"))
val (newPerson, count) = dao.insert(person)

Using kapt in Gradle

Annotation processors are supported in Kotlin with the kapt compiler plugin.

Add the dependencies using the kapt and implementation configurations in your dependencies block:

dependencies {
    kapt "org.seasar.doma:doma:2.27.1"
    implementation "org.seasar.doma:doma:2.27.1"
}

If you use resource files such as SQL files, make kapt able to find them:

kapt {
    arguments {
        arg("doma.resources.dir", compileKotlin.destinationDir)
    }
}

task copyDomaResources(type: Sync)  {
    from sourceSets.main.resources.srcDirs
    into compileKotlin.destinationDir
    include 'doma.compile.config'
    include 'META-INF/**/*.sql'
    include 'META-INF/**/*.script'
}

compileKotlin {
    dependsOn copyDomaResources
}

Note

Remember that you always have the following options:

  • Write all code in Kotlin
  • Write all code in Java
  • Write code annotated with Doma’s annotations in Java and others in Kotlin

The third option is worth considering, because it avoids some trouble with kapt.

Developer Documentation

Development of Doma

Note

This document is written for the developers of Doma.

Before You Start

To build you will need Git and JDK 8. Be sure that your JAVA_HOME environment variable points to the jdk1.8.0 folder extracted from the JDK download.

Get the Source Code

$ git clone https://github.com/domaframework/doma.git
$ cd doma

Build from the Command Line

$ ./gradlew build

Format the Source Code

We use google-java-format 1.6 for code formatting.

Command Line

Use the Spotless gradle plugin:

$ ./gradlew spotlessApply

Continuous Integration

We use Travis CI for continuous integration. All pull requests to the master branch are tested on Travis CI.

Documents

We use Sphinx to generate documents. To use Sphinx you will need Python.

Set up an environment

$ cd docs
$ pip install -r requirements.txt

Generate HTML files

Execute the sphinx-autobuild command in the docs directory:

$ sphinx-autobuild . _build/html

Visit the webpage served at http://127.0.0.1:8000.

About Doma

Release notes

v2.27.1: 2020-02-07

  • GH341 Fix broken link in document
  • GH340 Fixed that schema name was not used

v2.27.0: 2020-01-25

  • GH338 Reuse CharBuffer instances
  • GH332 Needs default constructor to AbstractDao to use quarkus

v2.26.0: 2019-12-29

  • GH330 Clarify support for Java 13
  • GH329 Replace table aliases in the ORDER BY clause

v2.25.1: 2019-08-25

  • GH324 Fix null dereference

v2.25.0: 2019-08-25

  • GH322 Describe settings for IntelliJ IDEA Community Edition 2019.2.1
  • GH321 Support domain mapping using ExternalDomain of Java Array
  • GH314 Upgrade google-java-format to 1.7
  • GH313 Upgrade spotless-plugin-gradle to 3.20.0
  • GH312 Clarify support for Java 12
  • GH311 Introduce wrapper classes dedicated to primitive types
  • GH310 Upgrade JUnit to 5.4.0

v2.24.0: 2019-02-23

  • GH308 Implement toString methods to make debug easy
  • GH307 Polish the org.seasar.doma.internal package
  • GH306 Remove Japanese comments
  • GH305 Register the option “doma.config.path” to suppress a warning message
  • GH304 Simplify the DefaultPropertyType class
  • GH303 Simplify generators
  • GH301 Fix compile error in procedure and function methods
  • GH300 Support bound type parameters in domain classes
  • GH299 Migrate from JUnit 3 to JUnit 5
  • GH298 Polish build script

v2.23.0: 2019-02-03

  • GH294 Support array types for bind and literal variable directives
  • GH276 Support array types for loop directives
  • GH286 Support automatic removal of unnecessary ORDER BY and GROUP BY clauses
  • GH295 Polish messages
  • GH284 Preserve the auto-commit state of a JDBC connection
  • GH283 Run tests in each annotation processor instance
  • GH296 GH293 GH292 GH291 GH290 GH289 GH288 GH287 GH285 GH282 Polish documents

v2.22.0: 2019-01-20

v2.21.0: 2019-01-06

  • GH267 Change the description of Doma
  • GH266 Remove descriptions of Doma 3
  • GH265 Polish annotation processing
  • GH264 Improve build script example
  • GH263 Translate resource bundle messages from Japanese into English
  • GH262 Translate javadoc comments from Japanese into English
  • GH261 Change the sphinx langulage option
  • GH260 Translate development.rst
  • GH259 Format with google-java-format 1.6
  • GH258 Translate docs from Japanese into English
  • GH257 Use Sphinx 1.8.2

v2.20.0: 2018-12-23

  • GH255 Support Gradle incremental annotation processing
  • GH254 Specify supported Java versions
  • GH253 Explain how to write gradle build script in Gradle 5.0
  • GH252 Polish Gradle configuration
  • GH251 Use Gradle 5.0
  • GH248 Fix mistake in abbreviation for JRE
  • GH247 Load a class with Class.forName when the context class loader fails to load the class
  • GH245 Revise wrong descriptions about @Update.excludeNull, @BatchUpdate.include and @BatchUpdate.exclude

v2.19.3: 2018-09-02

  • GH242 Remove wrapper task
  • GH241 Upgrade Gradle version
  • GH240 Refactor gradle script for eclipse
  • GH239 Resolve an external domain class by traversing class hierarchy
  • GH225 Add documents for IntelliJ IDEA
  • GH223 Fix Javadoc comment for Update#includeUnchanged

v2.19.2: 2018-03-11

  • GH220 Fix broken local transaction

v2.19.1: 2018-01-08

  • GH216 Document that the auto-generated value is set only if the identity field is either null or less than 0
  • GH215 Fix TypeDeclaration#removeOverriddenMethodDeclarations to prevent IllegalStateException

v2.19.0: 2017-11-19

  • GH211 Use CharSequence as parameter in expression functions
  • GH210 Support private methods in DAO classes

v2.18.0: 2017-10-28

  • GH208 Use javax.annotation.processing.Generated on JDK 9
  • GH207 If the result of the expression is Long, it is evaluated as Float.
  • GH206 Fix a wrong annotation parameter in a Docs example.
  • GH205 Remove full-width and _ from wild cards in Oracle dialect

v2.17.0: 2017-09-09

  • GH203 Add @TenantId to support the partitioned approach of multi-tenancy
  • GH202 Update url
  • GH200 Fix typo
  • GH199 Fix the use of the wrong class java.lang.reflect.Modifier

v2.16.1: 2017-05-14

  • GH196 Experimentally support Kotlin 1.1.2
  • GH195 Support automatic identifier generation using IDENTITY on Oracle 12c and later
  • GH194 Add params and literals methods to SelectBuilder

v2.16.0: 2017-02-19

  • GH191 Allow annotation processing options to be loaded from a configuration file

v2.15.0: 2017-02-05

  • GH184 Long name serial sequence
  • GH187 Fix warnings detected by Eclipse 4.6.2
  • GH188 Allow DAO interfaces to extend interfaces that define only default methods

v2.14.0: 2017-01-14

  • GH183 Add BatchUpdateExecutor, BatchDeleteExecutor, and MapBatchInsertBuilder
  • GH182 Fix a compile error in the generated code when an embeddable class defines no properties
  • GH181 Add @SqlProcessor for handling SQL built from SQL templates
  • GH180 Support Lombok
  • GH179 Fix StandardExpressionFunctions#escape not using escapeChar
  • GH177 Support Kotlin 1.0.6
  • GH176 Add BatchInsertExecutor
  • GH175 Support LocalDate and LocalDateTime in built-in functions
  • GH174 Add MapInsertBuilder, which builds INSERT statements automatically from a Map parameter

v2.13.0: 2016-11-13

  • GH170 Document caveats when using primitive types for identifiers
  • GH167 Fix wrong class names listed in the description of major changes in Doma 2

v2.12.1: 2016-08-06

  • GH165 Fix an infinite loop that occurs when an entity class has a two-level inheritance hierarchy

v2.12.0: 2016-07-14

  • GH161 Add a method to clear the SQL file cache
  • GH160 Support defining embeddable classes as nested types
  • GH159 Support defining entity classes as nested types
  • GH158 Update the copyright date in the documentation
  • GH156 Make the SQL file existence check case-sensitive for paths
  • GH155 fix typo

v2.11.0: 2016-06-18

  • GH153 Support embedding literals in query builders
  • GH151 Fix the check on the test literal immediately following a literal variable comment
  • GH150 Add literal variable comments

v2.10.0: 2016-05-28

  • GH146 Fix a bug where update operations fail when an embeddable object is null
  • GH145 Add documentation about Kotlin support
  • GH142 Add documentation for embeddable classes
  • GH141 Simplify access to parent properties when entity classes use inheritance
  • GH140 Fix a bug where assigning null to a primitive-typed property threw an exception; the default value is now set instead
  • GH139 Support @Embeddable
  • GH138 Remove the need for @ParameterName when defining immutable entities in Kotlin

v2.9.0: 2016-05-16

  • GH136 Support Kotlin destructuring declarations in the classes representing update results
  • GH135 Add source information such as class, method, and field names to messages output during annotation processing
  • GH134 Support specifying a Config that provides its singleton instance via a field in @Dao
  • GH133 Experimental support for kapt 1.0.1-2
  • GH132 Switching remote URLs from SSH to HTTPS
  • GH131 Fix a bug where annotation processing fails for classes placed in the default (unnamed) package
  • GH130 Improve test code

v2.8.0: 2016-04-16

  • GH127 Support defining domain classes as nested types

v2.7.0: 2016-02-27

  • GH125 Set the default FetchType of SelectBuilder to Lazy
  • GH124 Fix a wrong warning message
  • GH122 Allow Stream as the return type of search methods
  • GH121 Fix the wrong description of include

v2.6.2: 2016-02-11

  • GH118 Fix the link to the column list expansion comment in the SQL comment documentation
  • GH117 Add Javadoc generation options related to links
  • GH116 Fix query builders so that calling getSql() does not cause an error
  • GH115 Support Spring Boot DevTools

v2.6.1: 2016-01-11

  • GH111 Revert “Fix the problem that code generated by annotation processing contains redundant generics”

v2.6.0: 2015-11-21

  • GH107 Fix the problem that code generated by annotation processing contains redundant type arguments
  • GH105 Fix cause position in UniqueConstraintException’s constructor

v2.5.1: 2015-11-01

  • GH102 Fix a NullPointerException that occurs when UnknownColumnHandler’s handle() is implemented as a no-op

v2.5.0: 2015-10-10

  • GH99 Resolve poor batch update performance by reserving database IDENTITY values in advance

v2.4.1: 2015-09-12

  • GH96 Do not insert a space after expanding an embedded variable comment

v2.4.0: 2015-08-14

  • GH93 Fix JdbcMappingHint#getDomainClass() not returning the domain class
  • GH89 Make PortableObjectType generic so that String and similar types can be specified as valueType
  • GH88 Fix a typo in JdbcLogger method names: Failuer -> Failure
  • GH87 Add public constructors to subclasses of StandardExpressionFunctions
  • GH86 Version number spec is different from the document
  • GH84 Fix DOMA4122 being raised for methods that use populate
  • GH81 Fall back to the default resource bundle when a resource bundle cannot be obtained

v2.3.1: 2015-05-30

  • GH79 Fix updates using SQL files being skipped

v2.3.0: 2015-05-23

  • GH75 Automatically generate the SET clause of UPDATE statements in SQL files
  • GH74 Fix an ID generation error in PostgresDialect

v2.2.0: 2015-03-28

  • GH71 Allow interfaces to be annotated with @Domain
  • GH70 Obtain EntityListener instances through EntityListenerProvider
  • GH67 Fix paging not working correctly when a SQL Server OPTION clause is present
  • GH66 Apply naming conventions at runtime instead of compile time
  • GH64 Fix a NullPointerException when fetching immutable entities
  • GH61 Use OFFSET-FETCH, introduced in SQL Server 2012, for paging
  • GH60 Change the value returned by Mssql2008Dialect’s getName()
  • GH59 Fix test failures in Windows environments
  • GH58 Fix StringUtil’s fromCamelCaseToSnakeCase producing unintended results when a column name contains digits

v2.1.0: 2014-12-30

  • GH51 Fix LocalTransactionManager#notSupported() creating a new transaction
  • GH50 Allow identifiers to be appended to SQL using SQL comments
  • GH49 Set the Gradle project name to “doma”
  • GH48 Do not treat spaces inside /*%expand*/ as an alias

v2.0.1: 2014-08-06

  • Fix a compile error in annotation processing when byte[] is specified as the second type argument of DomainConverter

v2.0.0: 2014-07-02

  • Remove UnitOfWork

v2.0-beta-5: 2014-06-07

  • Make annotation processing report a compile error for DAO methods whose return type is List<Optional<Emp>> or List<Optional<Map<String, Object>>>
  • Fix changes not being reflected in OriginalStates after entity updates
  • Skip automatic identifier generation when the entity’s identifier value is already set
  • Fix a DOMA4257 error with column list expansion comments
  • Allow SQL log output to be controlled with annotations
  • Make log messages output from DAOs more detailed
  • Change the logger name of UtilLoggingJdbcLogger to the fully qualified class name
  • Fix the SQL file path not being logged when SQL is executed

v2.0-beta-4: 2014-05-04

  • Upgrade the Pluggable Annotation Processing API visitors to the Java 8 versions
  • Treat an empty java.util.Iterable bound to an IN clause as SQL null
  • Support java.sql.SQLXML
  • Fix LocalTransaction releasing savepoints “before” the specified savepoint where it should release those “after” it
  • Fix incorrect logging when removing savepoints in LocalTransaction
  • Fix annotation processing failing when an entity property type is a byte array

v2.0-beta-3: 2014-04-03

  • Allow search results to be processed with java.util.stream.Collector.
  • Extract the TransactionManager interface from LocalTransactionManager.
  • Fix some settings specified in Config being ignored.
  • Add the MapKeyNaming interface to uniformly control naming conventions for maps.
  • Support java.time.LocalDate, java.time.LocalTime, and java.time.LocalDateTime as basic types.
  • Add AbstractJdbcLogger to make it easier to replace JdbcLogger implementations.
  • Rename SelectStrategyType to SelectType.

Frequently Asked Questions

General

What does “Doma” mean?

The name “Doma” comes from the “Dao Oriented database MApping framework”.

What is annotation processing?

Annotation processing, which was introduced in Java 6, allows us to validate and generate source code at compile time.

We use annotation processing for the following purposes:

  • Generating meta classes from the classes annotated with @Entity and @Domain.
  • Generating implementation classes of the interfaces annotated with @Dao.
  • Validating SQL templates.

Runtime environment

Which libraries are required for Doma to work?

None.

Doma has no dependencies on other libraries.

Development environment

Which IDE do you recommend?

We recommend Eclipse and IntelliJ IDEA.

In Eclipse, the jar file of Doma is added to the Java Build Path but annotation processing doesn’t run.

Enable annotation processing and add the jar file to the Factory Path too. See also Build with Eclipse.

Where are generated source files in annotation processing?

In Eclipse, they are found in the .apt_generated directory.

In Eclipse, where is the .apt_generated directory?

You can find it in the Navigator view.

I get the message that the sql file is not found, but it exists.

You may get the following message, though the file exists:

[DOMA4019] The file[META-INF/../select.sql] is not found from the classpath

When you use Eclipse, check that the location of the output folder for resources is the same as the one for the class files in the Java Build Path dialog.

When you use Gradle, check that the resource files are copied to compileJava.destinationDir in advance of the compileJava task. See also Build with Gradle.

Features as a database access library

Does Doma generate SQL statements?

Yes, Doma generates the following statements:

  • INSERT
  • DELETE
  • UPDATE
  • Stored procedure call
  • Stored function call

Doma doesn’t generate SELECT statements but executes arbitrary SELECT statements and maps the results to the Java objects.

See also Queries for detailed information.

How are dynamic SQL statements executed?

Dynamic SQL statements are built with directives that are represented as SQL comments.

See also SQL templates for detailed information.

Does Doma map database relationships such as one-to-one and one-to-many to Java objects?

No.

Doma only maps each row of the SQL result set to a Java entity instance.

Does Doma provide a JDBC connection pooling feature?

No.

Use Doma together with a JDBC connection pool library such as HikariCP.