Entity fetched with pessimistic locking has outdated value

Question


I'm testing a small banking-style service backed by a MySQL database.

I'm using Hibernate's second-level cache (a possible source of the problem?).

The data structure basically consists of two tables: one holding the list of transactions (deposits) and one storing the latest balance for each account, which must be safe to update concurrently.

While testing the concurrency, I noticed some mistakes in my program. Here's my code; it's not perfect, but it does run.

@Service
@Transactional
public class DepositService {

    private final DepositTransactionRepository depositTransactionRepository;
    private final AccountBalanceRepository accountBalanceRepository;

    public DepositService(DepositTransactionRepository depositTransactionRepository,
                          AccountBalanceRepository accountBalanceRepository) {
        this.depositTransactionRepository = depositTransactionRepository;
        this.accountBalanceRepository = accountBalanceRepository;
    }

    // method below is being executed concurrently
    public DepositTransaction save(Long userId, BigDecimal amount) { // add a deposit transaction to the transaction list and update the related user balance

        DepositTransaction depositTransaction = new DepositTransaction();
        depositTransaction.setUserId(userId);
        depositTransaction.setAmount(amount);
        DepositTransaction depositTransactionLatest = depositTransactionRepository
                .findTopWithLockByUser_IdOrderByIdDesc(userId)
                .orElse(new DepositTransaction());
        depositTransaction.setBalance(depositTransactionLatest.getBalance().add(depositTransaction.getAmount())); // balance increment works well
        depositTransaction = depositTransactionRepository.save(depositTransaction); // insert into the transaction table works as expected, no inconsistencies / lost updates

        AccountBalance accountBalance = accountBalanceRepository.findOneWithLockById(depositTransaction.getUserId()).orElse(null); // update user balance
        accountBalance.setBalance(accountBalance.getBalance().add(depositTransaction.getAmount())); // most of the time, accountBalance.getBalance() still has an outdated value
        accountBalanceRepository.save(accountBalance);
        return depositTransaction;
    }
}
@Repository
public interface DepositTransactionRepository extends JpaRepository<DepositTransaction, Long> {
    @Lock(LockModeType.PESSIMISTIC_WRITE)
    Optional<DepositTransaction> findTopWithLockByUser_IdOrderByIdDesc(Long userId);
}
@Repository
public interface AccountBalanceRepository extends JpaRepository<AccountBalance, Long> {
    @Lock(LockModeType.PESSIMISTIC_WRITE)
    Optional<AccountBalance> findOneWithLockById(Long userId);
}

Based on the above code, I have one transaction, and inside it I use pessimistic-locking fetches for two entities. The generated queries correctly include "for update", and the rows are locked properly: I verified this by trying (and failing) to acquire the lock from the MySQL command line.

My problem is that AccountBalance ends up inconsistent under concurrent requests, and I don't quite understand why, given that the same locking works fine for the deposit transaction.

accountBalance.getBalance() always returns an outdated value, and it takes a few iterations before it reflects the updated value.

I can make this work with SERIALIZABLE transaction isolation plus a retry mechanism in case of deadlock, but I'd prefer pessimistic locking if possible.
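The serializable-plus-retry fallback mentioned above needs some retry wrapper around the transactional method. Below is a minimal plain-Java sketch of such a wrapper; all names here (`TransientLockException`, `withRetry`) are hypothetical, and in a real Spring/MySQL setup you would catch something like `CannotAcquireLockException` around a `@Transactional(isolation = Isolation.SERIALIZABLE)` call instead:

```java
import java.util.concurrent.Callable;

public class RetryOnDeadlock {

    // Hypothetical stand-in for a deadlock/lock-timeout error.
    static class TransientLockException extends RuntimeException {
        TransientLockException(String msg) { super(msg); }
    }

    // Runs the operation, retrying up to maxAttempts times when a
    // transient lock error is thrown (the deadlock victim's transaction
    // has already been rolled back, so it is safe to try again).
    static <T> T withRetry(int maxAttempts, Callable<T> operation) throws Exception {
        TransientLockException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return operation.call();
            } catch (TransientLockException e) {
                last = e; // lost the deadlock race; retry
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        // Fake operation that "deadlocks" twice before succeeding.
        int[] calls = {0};
        String result = withRetry(5, () -> {
            calls[0]++;
            if (calls[0] < 3) throw new TransientLockException("deadlock detected");
            return "committed";
        });
        System.out.println(result + " after " + calls[0] + " attempts"); // committed after 3 attempts
    }
}
```

The key design point is that the retry loop must sit *outside* the transaction boundary, otherwise the retry re-runs inside the already-rolled-back transaction.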

Answer 1

Score: 0


I had a case where fetching data with a pessimistic lock would correctly block if there happened to be a concurrent transaction, but would inexplicably return stale data. The concurrent transaction would update the data, and I expected my return value to reflect that update. The cause was that I was also fetching unrelated data prior to taking the lock, which caused Hibernate to seed its first-level cache with stale data. Something like this:

// Zoo has a collection of animals which will seed the first-level cache.
Zoo zoo = zooRepository.getZoo();

// ... do unrelated work with zoo

Animal panda = animalRepository.getAnimalByIdWithExclusiveLock(animalId);
/*
 * Even though the above call might block, making you THINK that
 * Hibernate is going to return the latest version of this specific
 * animal, in reality, if it's already in the first-level cache from
 * when you fetched the zoo, you will get the cached copy, which is
 * now stale.
 */

I figured this out by having Spring print the SQL and examining it carefully. I realized that, prior to my lock, fetching the Zoo had simultaneously fetched my target Animal in a stale state.

In my case, the answer was to fetch my locked entities at the very beginning of the transaction, before the first-level cache had any opportunity to get stale, like this:

// Fetch the important entity first so we know it's fresh from the database!
Animal panda = animalRepository.getAnimalByIdWithExclusiveLock(animalId);

// Now do the unrelated work
Zoo zoo = zooRepository.getZoo();

An alternative that I didn't try might be to clear the persistence context immediately before fetching the locked entity:

Zoo zoo = zooRepository.getZoo(); // First-level cache is now dirty/stale!

entityManager.clear(); // Clear the cache

Animal panda = animalRepository.getAnimalByIdWithExclusiveLock(animalId);

Hopefully that helps someone in the future.
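The stale-read behaviour described above can be illustrated without Hibernate at all. In the plain-Java analogy below, the `Session` class is a made-up stand-in for the persistence context: a read-through cache keeps returning the copy it already holds even after the backing store changes, until it is cleared (the analogue of `entityManager.clear()`):

```java
import java.util.HashMap;
import java.util.Map;

public class StaleCacheDemo {

    // Stand-in for the database table.
    static final Map<Long, String> database = new HashMap<>();

    // Made-up stand-in for a Hibernate session / persistence context.
    static class Session {
        private final Map<Long, String> firstLevelCache = new HashMap<>();

        // Read-through load: a repeated load by id hits the cache,
        // like an entity already associated with the persistence context.
        String load(Long id) {
            return firstLevelCache.computeIfAbsent(id, database::get);
        }

        // Analogue of entityManager.clear(): detach everything.
        void clear() {
            firstLevelCache.clear();
        }
    }

    public static void main(String[] args) {
        database.put(1L, "balance=100");

        Session session = new Session();
        session.load(1L);                     // cache seeded by an earlier, unrelated fetch

        database.put(1L, "balance=150");      // concurrent transaction commits an update

        System.out.println(session.load(1L)); // stale: balance=100
        session.clear();
        System.out.println(session.load(1L)); // fresh: balance=150
    }
}
```

This is only an analogy; the actual fix in the answer is to take the locking fetch first (or clear the persistence context) so the locked `SELECT ... FOR UPDATE` result is what the entity reflects.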

huangapple
  • Published 2020-04-07 05:06:17
  • Please keep this link when reposting: https://java.coder-hub.com/61068982.html