DynamoDB NEEDED the memory cache & auto-scaling built in, Hallelujah

When Amazon DynamoDB came out, I celebrated. Yay!!!! We had long struggled with our own key-value database implementation, trying to get a third or even fourth level of redundancy to ensure data was never lost. I happily threw our solution in the garbage and embraced DynamoDB. After using it for a while, the only two real weaknesses I found were that it had no memory cache and no auto-scaling mechanism for throughput. Since we were trying to run almost everything as Lambda functions, this was irritating. Lambda functions, being stateless, can't really hold a cache, and using them to scale DynamoDB throughput was difficult. We were going to either use EC2 machines for caching and scaling or abuse Lambda functions to achieve the same. We chose the latter. Why? I didn't want to break the seal on using "EC2 machines" again.

We were able to achieve it with Lambda functions, though in a way I didn't like. But we did it. AWS already provides memory cache solutions; it was only a matter of tricking Lambda functions into holding state inside the function itself (a stateless no-no, but a warm Lambda container really does keep its state ;-) ). We would ping it periodically, for no reason other than to keep that state alive in memory.
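
Here's a minimal sketch of that trick in Python. The table name, the cache shape, and the scheduled "ping" event are all my assumptions for illustration; the point is that module-level state survives across invocations as long as the container stays warm, so a timed ping keeps the cache alive.

```python
import boto3

# Module-level state survives between invocations while the
# container stays warm -- that's the whole trick.
_cache = {}
_table = boto3.resource("dynamodb").Table("MyTable")  # hypothetical table name

def handler(event, context):
    # A scheduled rule sends {"ping": true} for no reason other than
    # to keep this container (and its in-memory cache) alive.
    if event.get("ping"):
        return {"cached_items": len(_cache)}

    key = event["id"]
    if key not in _cache:
        # Cache miss: fall through to DynamoDB and remember the result.
        item = _table.get_item(Key={"id": key}).get("Item")
        _cache[key] = item
    return _cache[key]
```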

Our DynamoDB auto-scaling was borrowed from the open-source project Dynamic DynamoDB. Timed Lambda functions took care of monitoring consumed capacity and adjusting the provisioned throughput. But it was clunky.
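
The scaling side looked roughly like the sketch below. The table name, the 80% threshold, and the doubling factor are placeholders of mine; the real Dynamic DynamoDB logic also handles scale-down, cooldowns, and DynamoDB's limits on throughput decreases, which I'm skipping here.

```python
import datetime
import boto3

# Placeholder values -- the real Dynamic DynamoDB config is far richer.
TABLE = "MyTable"
SCALE_UP_AT = 0.80  # scale up above 80% utilization
FACTOR = 2

cloudwatch = boto3.client("cloudwatch")
dynamodb = boto3.client("dynamodb")

def handler(event, context):
    # How much read capacity did the table actually consume lately?
    now = datetime.datetime.utcnow()
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/DynamoDB",
        MetricName="ConsumedReadCapacityUnits",
        Dimensions=[{"Name": "TableName", "Value": TABLE}],
        StartTime=now - datetime.timedelta(minutes=5),
        EndTime=now,
        Period=300,
        Statistics=["Sum"],
    )
    consumed = sum(p["Sum"] for p in stats["Datapoints"]) / 300  # units/sec

    table = dynamodb.describe_table(TableName=TABLE)["Table"]
    throughput = table["ProvisionedThroughput"]
    reads = throughput["ReadCapacityUnits"]

    if consumed / reads > SCALE_UP_AT:
        # Bump provisioned reads; UpdateTable does the actual scaling.
        dynamodb.update_table(
            TableName=TABLE,
            ProvisionedThroughput={
                "ReadCapacityUnits": reads * FACTOR,
                "WriteCapacityUnits": throughput["WriteCapacityUnits"],
            },
        )
```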

AWS has now solved both problems: auto-scaling for DynamoDB and a memory cache in the form of DynamoDB Accelerator (DAX). They did it with a very simple interface in the console. Long overdue, but awesome. Other than wishing it were free (or at least cheaper), it's perfect.
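
If you'd rather script it than click through the console, the built-in auto-scaling can also be wired up through the Application Auto Scaling API. The table name and capacity bounds below are placeholders; this just keeps read utilization tracking a 70% target.

```python
import boto3

autoscaling = boto3.client("application-autoscaling")
TABLE = "MyTable"  # placeholder table name

# Tell Application Auto Scaling it may manage this table's read capacity.
autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId=f"table/{TABLE}",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    MinCapacity=5,
    MaxCapacity=500,
)

# Target-tracking policy: keep read utilization around 70%.
autoscaling.put_scaling_policy(
    PolicyName="reads-target-tracking",
    ServiceNamespace="dynamodb",
    ResourceId=f"table/{TABLE}",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
        },
    },
)
```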

Thank you, AWS.

