Scale Your Data Collection on the Cloud Like a Champ


Slide 0

Scale Your Data Collection on the Cloud Like a Champ. Moty Michaely, VP R&D, Xplenty

Slide 1

Scaling data collection = a pain Plenty of companies are limited by their data collection methods when it comes to scalability. Once they need more detailed data and in larger quantities, scaling the system can become a major pain.

Slide 2

Three common methods for collecting big data... Is your company using the right one?
- Storing directly in the DB
- Keeping it in a local file
- S3/CloudFront logging

Slide 3

Storing Directly in the DB This is what companies usually start with. As the name suggests, data is inserted right into the DB. There are two ways to do it: row by row means each piece of data is added to the DB as a row in real time; bulk insert adds multiple rows to the DB in one transaction. (Bulk insert is faster than row by row, but if insertion of the batch fails, a big chunk of data has to be re-inserted.)
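The two insertion styles above can be sketched with SQLite; the events table and its columns are illustrative, not from the talk:

```python
import sqlite3

# Hypothetical events table; names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id TEXT, action TEXT, ts TEXT)")

# Row by row: one transaction per event, data queryable in real time.
def insert_row(event):
    with conn:  # commits (or rolls back) this single row
        conn.execute("INSERT INTO events VALUES (?, ?, ?)", event)

# Bulk insert: many rows in one transaction -- faster, but if any row
# fails the whole batch rolls back and must be re-inserted.
def insert_bulk(events):
    with conn:
        conn.executemany("INSERT INTO events VALUES (?, ?, ?)", events)

insert_row(("u1", "click", "2014-01-01T00:00:00"))
insert_bulk([("u2", "view", "2014-01-01T00:00:01"),
             ("u3", "click", "2014-01-01T00:00:02")])
print(conn.execute("SELECT COUNT(*) FROM events").fetchone()[0])  # 3
```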

Slide 4

Pros for Storing Directly in the DB Better performance than other methods for inserting data. Real-time data available when adding row by row.

Slide 5

Cons for Storing Directly in the DB Schema changes are required to add new types of data. Scaling is required at two layers - application and database. Scaling the application is usually easier (using a network load balancer, for example), but scaling the database requires hiring an expert DBA, partitioning the DB, and scaling up the server. (Relational DBs that scale out to multiple nodes are expensive and require a lot of maintenance.)

Slide 6

Bottom line Storing directly in the DB gives you fast performance, but it doesn’t scale.

Slide 7

Keeping it Local Data is dumped into big local files. These files are periodically uploaded via a program to S3, or inserted in batches into a NoSQL DB such as Amazon DynamoDB or a data warehouse like Amazon Redshift.
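A minimal sketch of this pattern: events are appended to a local file that is rotated once it grows past a size threshold, leaving closed files ready for a separate uploader to push to S3 (e.g. with boto3) or batch-load into DynamoDB/Redshift. Paths, the 1 KB threshold, and the event shape are all illustrative assumptions:

```python
import json
import os
import tempfile

MAX_BYTES = 1024  # illustrative rotation threshold

class LocalCollector:
    def __init__(self, directory):
        self.directory = directory
        self.seq = 0
        self.rotated = []  # closed files, ready for upload
        self._open()

    def _open(self):
        self.path = os.path.join(self.directory, f"events.{self.seq}.log")
        self.fh = open(self.path, "a")

    def track(self, event):
        # One JSON object per line; any file format would work here.
        self.fh.write(json.dumps(event) + "\n")
        self.fh.flush()
        if os.path.getsize(self.path) >= MAX_BYTES:
            self.rotate()

    def rotate(self):
        self.fh.close()
        self.rotated.append(self.path)  # a separate process would upload these
        self.seq += 1
        self._open()

tmp = tempfile.mkdtemp()
collector = LocalCollector(tmp)
for i in range(100):
    collector.track({"user": f"u{i}", "action": "click"})
print(len(collector.rotated))  # several files rotated out for upload
```

Note that this sketch skips exactly the hard parts the next slide warns about: failure handling and transactionality around the upload step.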

Slide 8

Pros for Keeping it in a local file New types of data can be added easily since no schema changes are required. Compatible with all applications because any file format can be used. Quicker filtering via customized directory/file names, e.g. with date/time indication.

Slide 9

Cons for Keeping it in a local file One needs to develop a tracking program to deal with the files - rotating logs while more data comes in, handling failures, and ensuring transactionality. Developing such a program is hard even with the manpower, time, and money. Scaling means adding more servers, more maintenance, and more money. Data is not as query-able as data stored in a DB. Staging and production environments require extra servers.

Slide 10

Bottom Line More flexible than direct DB storage, but requires more development, and scaling is still an issue.

Slide 11

S3/CloudFront Logging This old-school solution goes back to the early days when visitor counters and burning "hot!" animations ruled the web. To track an event, an HTTP request is sent for a 1x1 pixel image in a relevant S3 directory. Accessing the image automatically generates a W3C log entry with all HTTP request parameters: IP address, browser, date/time, etc. Extra session-level data, like username or mouse position, is passed via the query string. To differentiate between event types, images are placed in accordingly named directories, e.g. /click/.
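The tracking-pixel URL described above can be sketched as follows; the CloudFront domain, the /click/ directory, and the parameter names are hypothetical. The client fetches this 1x1 image, and CloudFront/S3 logs the request, query string included, automatically:

```python
from urllib.parse import urlencode

def pixel_url(event_type, **params):
    # Event type becomes the directory, so logs can be filtered by path.
    base = f"https://d1234.cloudfront.net/{event_type}/pixel.gif"
    return f"{base}?{urlencode(params)}" if params else base

url = pixel_url("click", user="u42", x=10, y=20)
print(url)  # https://d1234.cloudfront.net/click/pixel.gif?user=u42&x=10&y=20
```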

Slide 12

Pros for S3/CloudFront Logging No tracking server required - data reaches S3 automatically. No file management - Amazon handles all file monkey business. No servers - Amazon provides them. Cost effective - only log storage and bandwidth are paid for. The logs take little space since they are all GZipped and the bandwidth for 1x1 pixel images is marginal.

Slide 13

Pros for S3/CloudFront Logging continued Easily scalable with practically infinite space and firepower. Quick and easy to implement. Simple setup for staging/production environments via additional distributions and a prefix. Web application performance unharmed, especially using the CloudFront CDN.

Slide 14

Cons for S3/CloudFront Logging Slower filtering performance compared to a local setup. Amazon names log files and directories automatically, and no customization is available. Not suitable for real time or the impatient. Data is aggregated into a new file in the bucket only once per hour, and that's Amazon's best effort, so it could take longer. Data is not as query-able as data stored in a DB. Vendor-dependent - having your servers outside of Amazon will decrease performance. No control over the file format. The W3C Extended Log File Format is mandatory, and some applications may not like that.
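Since the W3C Extended Log File Format is mandatory, downstream tooling has to parse it. A minimal sketch: a "#Fields:" directive names the columns and each data line is tab-separated, as in CloudFront access logs. The sample log below is fabricated for illustration:

```python
# Illustrative sample, not a real log.
SAMPLE = """#Version: 1.0
#Fields: date time cs-uri-stem cs-uri-query c-ip
2014-01-01\t12:00:00\t/click/pixel.gif\tuser=u42\t203.0.113.7
"""

def parse_w3c(text):
    fields, rows = [], []
    for line in text.splitlines():
        if line.startswith("#Fields:"):
            fields = line.split()[1:]          # column names
        elif line and not line.startswith("#"):
            rows.append(dict(zip(fields, line.split("\t"))))
    return rows

rows = parse_w3c(SAMPLE)
print(rows[0]["cs-uri-stem"])  # /click/pixel.gif
```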

Slide 15

Bottom Line Quick, cheap, and scalable though it doesn’t provide the best performance and customization.

Slide 16

What’s right for you? So much emphasis is put on the technologies used for processing, analyzing, and visualizing data, but the collection of that data too often gets lost in the shuffle. The two go hand in hand: to get good output from your data, you must first have proper input. Only once you achieve synergy between the two will you be able to fully tap into your data’s potential.

Slide 17

Xplenty www.xplenty.com