BatchWrite is a great way to speed up DynamoDB data loads and updates, but it has some caveats that you MUST know about.
A partial success will not throw an error. The call resolves normally and anything that didn't make it comes back in UnprocessedItems, so you must check for those items and retry them yourself.
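In other words, a call like this can "succeed" while silently dropping writes. A minimal sketch, assuming the AWS SDK for JavaScript v3 DocumentClient and a placeholder table name:

import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { DynamoDBDocumentClient, BatchWriteCommand } from '@aws-sdk/lib-dynamodb';

const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));

const res = await ddb.send(new BatchWriteCommand({
  RequestItems: {
    // 'my-table' is a placeholder table name
    'my-table': [{ PutRequest: { Item: { pk: '1', payload: 'hello' } } }],
  },
}));

// No exception was thrown, but some writes may still have been throttled:
if (Object.keys(res.UnprocessedItems ?? {}).length > 0) {
  // these requests were not written and need to be retried
}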
I ran into this due to API throttling. I was using PAY_PER_REQUEST, so I expected capacity to scale automatically. DDB does scale automatically, but not instantly, so during the initial burst some writes were throttled and I was silently losing records.
I created a wrapper function that checks for unprocessed items and retries them. I was running multiple loads in parallel, so I added some randomness to the retry wait time to keep the retries from all firing in lockstep.
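Here's a sketch of that wrapper, reusing the ddb client from above. The backoff and jitter numbers are illustrative, not tuned values: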
async function batchWrite(tableName, items, maxRetries = 5) {
  // BatchWrite expects a request map: { TableName: [ { PutRequest: ... }, ... ] }
  let requestItems = {
    [tableName]: items.map((Item) => ({ PutRequest: { Item } })),
  };

  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const { UnprocessedItems } = await ddb.send(
      new BatchWriteCommand({ RequestItems: requestItems })
    );

    // Full success: UnprocessedItems comes back as an empty object.
    if (!UnprocessedItems || Object.keys(UnprocessedItems).length === 0) {
      return;
    }

    // Retry only the leftovers, with exponential backoff plus random jitter.
    requestItems = UnprocessedItems;
    await new Promise((resolve) =>
      setTimeout(resolve, 2 ** attempt * 100 + Math.random() * 100)
    );
  }

  throw new Error(`Batch still had unprocessed items after ${maxRetries} retries`);
}
Also, don’t forget that each BatchWrite call accepts at most 25 items, so split your requests into chunks of 25 first. I usually use lodash chunk, as in the example below.
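Something like this, feeding each chunk through the wrapper above (loadAll is just a hypothetical name):

import _ from 'lodash';

// Hypothetical helper: loads any number of items through the retrying wrapper.
async function loadAll(tableName, items) {
  // BatchWrite caps out at 25 items per request.
  for (const batch of _.chunk(items, 25)) {
    await batchWrite(tableName, batch);
  }
}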