For two days I have been trying to store an ArrayList of roughly 6 million entries in my Postgres database using Spring Data JPA. It works, but it is very slow: the whole run takes about 27 minutes. I have experimented with the batch size, without much success, and I have also noticed that the larger the table gets, the longer each save takes. Is there a way to speed this up? I previously did the same job with SQLite, where it took only about 15 seconds.
My entity
import javax.persistence.*;

import lombok.Data;

@Data
@Entity
@Table(name = "commodity_prices")
public class CommodityPrice {

    @Id
    @Column(name = "id")
    @GeneratedValue(strategy = GenerationType.SEQUENCE)
    private long id;

    @Column(name = "station_id")
    private int station_id;

    @Column(name = "commodity_id")
    private int commodity_id;

    @Column(name = "supply")
    private long supply;

    @Column(name = "buy_price")
    private int buy_price;

    @Column(name = "sell_price")
    private int sell_price;

    @Column(name = "demand")
    private long demand;

    @Column(name = "collected_at")
    private long collected_at;

    public CommodityPrice(int station_id, int commodity_id, long supply, int buy_price,
            int sell_price, long demand, long collected_at) {
        this.station_id = station_id;
        this.commodity_id = commodity_id;
        this.supply = supply;
        this.buy_price = buy_price;
        this.sell_price = sell_price;
        this.demand = demand;
        this.collected_at = collected_at;
    }
}
My insert class
import java.io.FileReader;
import java.io.IOException;
import java.io.Reader;
import java.util.ArrayList;
import java.util.List;

import javax.persistence.EntityManager;

import org.apache.commons.csv.CSVFormat;
import org.apache.commons.csv.CSVRecord;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import org.springframework.util.StopWatch;

import lombok.extern.slf4j.Slf4j;

@Slf4j
@Component
public class CommodityPriceHandler {

    @Autowired
    CommodityPriceRepository commodityPriceRepository;

    @Autowired
    private EntityManager entityManager;

    public void inserIntoDB() {
        int lineCount = 0;
        List<CommodityPrice> commodityPrices = new ArrayList<>();
        StopWatch stopWatch = new StopWatch();
        stopWatch.start();
        try {
            Reader reader = new FileReader(DOWNLOAD_SAVE_PATH + FILE_NAME_COMMODITY_PRICES);
            Iterable<CSVRecord> records = CSVFormat.EXCEL.withFirstRecordAsHeader().parse(reader);
            for (CSVRecord record : records) {
                int station_id = Integer.parseInt(record.get("station_id"));
                int commodity_id = Integer.parseInt(record.get("commodity_id"));
                long supply = Long.parseLong(record.get("supply"));
                int buy_price = Integer.parseInt(record.get("buy_price"));
                int sell_price = Integer.parseInt(record.get("sell_price"));
                long demand = Long.parseLong(record.get("demand"));
                long collected_at = Long.parseLong(record.get("collected_at"));
                CommodityPrice commodityPrice = new CommodityPrice(station_id, commodity_id, supply,
                        buy_price, sell_price, demand, collected_at);
                commodityPrices.add(commodityPrice);
                if (commodityPrices.size() == 1000) {
                    commodityPriceRepository.saveAll(commodityPrices);
                    commodityPriceRepository.flush();
                    entityManager.clear();
                    commodityPrices.clear();
                    System.out.println(lineCount);
                }
                lineCount++;
            }
        }
        catch (IOException e) {
            log.error(e.getLocalizedMessage());
        }
        commodityPriceRepository.saveAll(commodityPrices);
        stopWatch.stop();
        log.info("Successfully inserted " + lineCount + " lines in " + stopWatch.getTotalTimeSeconds() + " seconds.");
    }
}
My application properties
# HIBERNATE
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.PostgreSQLDialect
spring.jpa.properties.hibernate.jdbc.lob.non_contextual_creation=true
spring.jpa.hibernate.ddl-auto = update
spring.jpa.properties.hibernate.jdbc.batch_size=1000
spring.jpa.properties.hibernate.order_inserts=true
Accepted answer
Even when you batch your inserts, your sequence generation strategy still requires one statement per inserted record just to fetch the next id. So for a batch size of 1000 records you actually issue 1001 statements, which is clearly not what you intended.
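To get a feel for that overhead: the number of sequence round trips is a ceiling division of the row count by the allocation size. A minimal sketch (the 6 million figure is from the question; the helper class name is made up for illustration):

```java
public class SequenceMath {

    // Number of "select nextval(...)" round trips Hibernate needs:
    // one per block of `allocationSize` ids (ceiling division).
    static long sequenceCalls(long rows, int allocationSize) {
        return (rows + allocationSize - 1) / allocationSize;
    }

    public static void main(String[] args) {
        // Default SEQUENCE strategy fetches one id at a time.
        System.out.println(sequenceCalls(6_000_000, 1));
        // With allocationSize = 50, one fetch covers 50 inserts.
        System.out.println(sequenceCalls(6_000_000, 50));
    }
}
```

With an allocation size of 50, the 6 million id fetches shrink to 120,000, on top of which the actual inserts can then be batched properly.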
My suggestions:
Enable SQL logging to see which statements are sent to your database. Personally I use datasource-proxy, but use whatever you are comfortable with.
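If you don't want to pull in an extra library, plain Spring Boot logging already shows the statements Hibernate issues. A sketch using standard properties (an assumption on my part, not the datasource-proxy setup mentioned above):

```properties
# Log every SQL statement Hibernate executes
logging.level.org.hibernate.SQL=DEBUG
# Print per-session statistics (statement counts, time spent in JDBC)
spring.jpa.properties.hibernate.generate_statistics=true
```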
Modify your sequence generator. At a minimum, use:
@Id
@Column(name = "id")
@GeneratedValue(generator = "com_pr_generator", strategy = GenerationType.SEQUENCE)
@SequenceGenerator(name = "com_pr_generator", sequenceName = "book_seq", allocationSize = 50)
private long id;
Regarding java - Spring Data JPA BigList insert, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/55837289/