Contributors

ERROR: could not serialize access due to concurrent update (case using Job Queue)

Dear community,

We have a case that needs to process a large number of transactions (500k arrive on the last day of the month), so we rely on our best friend, OCA's Job Queue, to run things in parallel.

Most processes are OK, but for the one that creates stock pickings, the jobs can't run in parallel because of a concurrency issue on the "stock_quant" table: it looks like many separate jobs are updating the same record.

bad query:  update stock_quant set reserved_quantity = 10.00 ... where id in (100)
ERROR: could not serialize access due to concurrent update
bad query:  update stock_quant set reserved_quantity = 10.00 ... where id in (100)
ERROR: could not serialize access due to concurrent update
.....

Concurrent updates are a very common issue we face. How do you work around this problem?

Thank you,
Kitti U.

by Kitti Upariphutthiphong - 02:16 - 4 Mar 2024

Follow-Ups

  • Re: ERROR: could not serialize access due to concurrent update (case using Job Queue)
    Thanks everyone!

    Reservation Method = Manual sounds like a valid solution. We will test it and report the result.

    On Mon, Mar 4, 2024 at 9:32 PM Pedro M. Baeza <notifications@odoo-community.org> wrote:
    If we're talking about picking generation, I wouldn't reserve at that time; instead, do a general "reserve round" at the end of the batch, and thus you remove the quant lock contention.

    Regards.

    _______________________________________________
    Mailing-List: https://odoo-community.org/groups/contributors-15
    Post to: mailto:contributors@odoo-community.org
    Unsubscribe: https://odoo-community.org/groups?unsubscribe


    by Kitti Upariphutthiphong - 05:00 - 4 Mar 2024
  • Re: ERROR: could not serialize access due to concurrent update (case using Job Queue)
    If we're talking about picking generation, I wouldn't reserve at that time; instead, do a general "reserve round" at the end of the batch, and thus you remove the quant lock contention.

    Regards.
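
    A minimal sketch of this "reserve round" approach, in plain Python with illustrative names (no Odoo): the parallel jobs only create pickings and never touch stock_quant, and a single sequential pass at the end does all the reservations, so there is only ever one writer on the quant rows.

```python
# Sketch only: stand-ins for picking creation and reservation.
created_pickings = []   # pickings created by the parallel jobs
quant_writes = []       # updates a reserve round would make to stock_quant

def create_picking_job(order):
    # Safe to run in parallel: no reservation, so no stock_quant update.
    created_pickings.append(order)

def reserve_round(pickings):
    # One sequential writer: no "could not serialize access" conflicts.
    for picking in pickings:
        quant_writes.append("reserve %s" % picking)

for order in ("SO001", "SO002", "SO003"):
    create_picking_job(order)   # in production these are separate queue jobs
reserve_round(created_pickings)
```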

    by Pedro M. Baeza - 03:31 - 4 Mar 2024
  • Re: ERROR: could not serialize access due to concurrent update (case using Job Queue)
    On 3/4/24 15:07, Kitti Upariphutthiphong wrote:
    
    > I was wondering if there is any way to unlock the table, at least
    > temporarily, during execution. But as far as I have researched, I
    > can't find a way yet.
    
    I don't think a "temporary unlock" is possible, or advisable, but another
    way is to take the lock as late as possible, i.e. as close as possible
    before the commit() of your transaction. That way, the time your lock
    persists is shortest and the chance of conflict is lowest (the lower you
    get it, the more viable it becomes to just rely on RetryableJobError for
    the small number of cases where a conflict arises).
    
    A strategy for this can be to do the thing that locks, and right after
    that, fire a new queue job that does the rest of the work.
    
    We've had success with this in cases where you have, for example:
    
    Process payment transaction job:
    
    1. Start database transaction
    2. Create payment transaction
    3. Confirm the sale.order, which may generate a stock.picking and confirm 
    it, thereby locking the quant table
    4. Generate the invoice (during this time some rows in the quant table 
    will still be locked, so conflicts can occur)
    5. Send out the invoice by mail (during this time some rows in the quant 
    table will still be locked, so conflicts can occur)
    6. End of database transaction (commit)
    
    Instead, you add "with_delay()" around steps 4+5 so that they run in a 
    separate queue job, to which the locking does not apply.
    
    Of course, this requires refactoring core or custom code, so it might 
    not be a viable solution in your case.
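
    The split described above can be sketched in plain Python (no Odoo; with_delay below is a stand-in for queue_job's records.with_delay()): steps 1-3 run and commit first, so the stock_quant locks are released quickly, and steps 4-5 run later in their own job.

```python
# Sketch only: a list stands in for queue_job's table of pending jobs.
job_queue = []
log = []   # records what ran, and in which transaction

def with_delay(func, *args):
    """Enqueue func instead of calling it now (mimics records.with_delay())."""
    job_queue.append((func, args))

def invoice_and_mail(order):
    # Steps 4-5: run in their own transaction later; no quant locks held.
    log.append("invoice+mail %s" % order)

def process_payment(order):
    # Steps 1-3: confirming the picking is what locks rows in stock_quant.
    log.append("confirm %s (quant rows locked)" % order)
    # Defer steps 4-5 so the commit below releases the locks quickly.
    with_delay(invoice_and_mail, order)
    log.append("commit %s (locks released)" % order)

process_payment("SO001")
# Later, the job runner picks up the deferred work:
for func, args in job_queue:
    func(*args)
```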
    
    by Tom Blauwendraat - 03:26 - 4 Mar 2024
  • Re: ERROR: could not serialize access due to concurrent update (case using Job Queue)
    Hello,

    Your problem seems to be linked to stock reservation. By default, picking types (Operation Types) are configured to make the stock reservation at picking confirmation. If that is the case, these concurrent update errors are not surprising when the created pickings contain the same products.
    You could try changing the "Reservation Method" on the picking type(s) concerned to "Manual", and then manage the stock reservation picking by picking afterward.
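
    As a sketch of this configuration (e.g. from an Odoo shell; reservation_method exists on stock.picking.type in Odoo 15+, and the search domains here are only examples):

```python
# Switch the outgoing operation types to manual reservation.
picking_types = env["stock.picking.type"].search([("code", "=", "outgoing")])
picking_types.write({"reservation_method": "manual"})

# Later, a single sequential pass reserves the confirmed pickings:
pickings = env["stock.picking"].search([("state", "=", "confirmed")])
pickings.action_assign()
```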

    Regards,
    Florian

    Le lun. 4 mars 2024 à 15:07, Kitti Upariphutthiphong <notifications@odoo-community.org> a écrit :
    Thanks Adam,

    In fact, if we didn't have time constraints, that would work.

    The problem is that we really need many jobs (around 10 processes that create pickings) to run simultaneously, without locking, in order to process 500k records (more in the future) in a very limited time (a couple of hours).

    I was wondering if there is any way to unlock the table, at least temporarily, during execution. But as far as I have researched, I can't find a way yet.

    On Mon, Mar 4, 2024 at 8:37 PM Adam Heinz <notifications@odoo-community.org> wrote:
    I have a couple of strategies that I use, neither of which I am in love with:

    1. Catch the serialization error and reraise a RetryableJobError. This works well enough when serialization errors are intermittent and the job has no side effects.
    2. Set ODOO_QUEUE_JOB_CHANNELS=root:32,single:1 in the environment, and put problematic jobs into the `single` channel. This is a tool of last resort, as it slows problematic jobs down to running single-threaded, but I have found it necessary when the serialization errors occur on basically every execution.
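
    Strategy 1 can be sketched in plain Python (the exception classes below are stand-ins: in a real job you would catch psycopg2's serialization failure and raise queue_job's RetryableJobError):

```python
class SerializationFailure(Exception):
    """Stands in for psycopg2's 'could not serialize access' error."""

class RetryableJobError(Exception):
    """Stands in for queue_job's RetryableJobError."""

def reserve_stock(attempt):
    # Simulate a conflict on the first attempt only; note the job must be
    # free of side effects for a retry to be safe.
    if attempt == 0:
        raise SerializationFailure(
            "could not serialize access due to concurrent update")
    return "reserved"

def run_job(attempt):
    try:
        return reserve_stock(attempt)
    except SerializationFailure as exc:
        # Re-raise as retryable so the runner reschedules instead of failing.
        raise RetryableJobError(str(exc)) from exc

# Tiny driver standing in for the queue_job runner's retry loop:
result, attempt = None, 0
while result is None:
    try:
        result = run_job(attempt)
    except RetryableJobError:
        attempt += 1
```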



    by Florian da Costa - 03:15 - 4 Mar 2024
  • Re: ERROR: could not serialize access due to concurrent update (case using Job Queue)
    Thanks Adam,

    In fact, if we didn't have time constraints, that would work.

    The problem is that we really need many jobs (around 10 processes that create pickings) to run simultaneously, without locking, in order to process 500k records (more in the future) in a very limited time (a couple of hours).

    I was wondering if there is any way to unlock the table, at least temporarily, during execution. But as far as I have researched, I can't find a way yet.

    On Mon, Mar 4, 2024 at 8:37 PM Adam Heinz <notifications@odoo-community.org> wrote:
    I have a couple of strategies that I use, neither of which I am in love with:

    1. Catch the serialization error and reraise a RetryableJobError. This works well enough when serialization errors are intermittent and the job has no side effects.
    2. Set ODOO_QUEUE_JOB_CHANNELS=root:32,single:1 in the environment, and put problematic jobs into the `single` channel. This is a tool of last resort, as it slows problematic jobs down to running single-threaded, but I have found it necessary when the serialization errors occur on basically every execution.


    by Kitti Upariphutthiphong - 03:05 - 4 Mar 2024