Apache's Hadoop project aims to solve these problems by providing a framework for running large data processing applications on clusters of commodity hardware. Combined with Amazon EC2 for running the application, and Amazon S3 for storing the data, we can run large jobs very economically. This paper describes how to use Amazon Web Services and Hadoop to run an ad hoc analysis on a large collection of web access logs that otherwise would have cost a prohibitive amount in either time or money.
Apache Sqoop(TM) is a tool designed for efficiently transferring bulk data between Apache Hadoop and structured datastores such as relational databases.
Arun Murthy, release manager for Apache Hadoop 2.0, has published a first alpha version of the upcoming Hadoop generation, which, among other things, offers high availability for HDFS.
Former Debian project leader Bruce Perens presents a concept for dual-licensing software: under it, code contributed by free-software developers should never be used exclusively under a proprietary license.
Cascading is a data processing API, process planner, and process scheduler for defining and executing complex, scale-free, fault-tolerant data processing workflows on an Apache Hadoop cluster, all without having to 'think' in MapReduce.
Cascading is a thin Java library and API that sits on top of Hadoop's MapReduce layer and is executed from the command line like any other Hadoop application.
As a library and API that can be driven from any JVM-based language (Jython, JRuby, Groovy, Clojure, etc.), Cascading lets developers create applications and frameworks that are "operationalized": a single deployable JAR can encapsulate a series of complex and dynamic processes driven entirely from the command line or a shell, rather than relying on external schedulers to glue many individual applications together with XML against each one's command-line interface.
The Cascading API approach dramatically simplifies development, regression and integration testing, and deployment of business-critical applications on Amazon Web Services (such as Elastic MapReduce) as well as on dedicated hardware.
Cascading is not a new text-based query syntax (like Pig) or another complex system that must be installed on a cluster and maintained (like Hive); rather, it is both complementary to and a valid alternative to either application.
Cascading is an application framework that lets Java developers quickly and easily build robust data analytics and data management applications on Apache Hadoop.
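The pipe-assembly idea described above can be sketched in plain Python. Note that this is an illustrative toy, not Cascading's actual API (which is Java, with classes such as Pipe, Each, and GroupBy); the helper names here are invented for the sketch:

```python
from collections import defaultdict

# Toy sketch of the "pipe assembly" idea: composable processing steps
# chained over a stream of records, with no MapReduce concepts exposed.
# The names each/group_by/count are illustrative, not Cascading's API.

def each(records, fn):
    """Apply fn to every record (analogous to Cascading's Each)."""
    for r in records:
        yield fn(r)

def group_by(records, key_fn):
    """Group records by key (analogous to Cascading's GroupBy)."""
    groups = defaultdict(list)
    for r in records:
        groups[key_fn(r)].append(r)
    return groups.items()

def count(groups):
    """Count records per group (analogous to an Every/Count aggregator)."""
    for key, rs in groups:
        yield (key, len(rs))

# Assemble the "pipe": parse -> group -> count, then run it over some
# hypothetical access-log lines.
lines = ["GET /index.html", "GET /about.html", "POST /index.html"]
result = sorted(count(group_by(each(lines, lambda l: l.split()[1]),
                               lambda path: path)))
print(result)  # [('/about.html', 1), ('/index.html', 2)]
```

The point of the sketch is the shape of the program: the developer chains logical steps, and a planner (here, plain function composition) is responsible for turning that chain into executable work.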
Many companies, including IBM, Google, VMware, and Amazon, offer products and strategies for cloud computing. This article shows you how to use Apache Hadoop to build a MapReduce framework on a Hadoop cluster and how to create a sample MapReduce application that runs on it. You will also learn how to run a time- and disk-intensive task in the cloud.
This course is about scalable approaches to processing large amounts of information (terabytes and even petabytes). We focus mostly on MapReduce, which is presently the most accessible and practical means of computing at this scale, but will discuss other approaches as well.
Our world is being revolutionized by data-driven methods: access to large amounts of data has generated new insights and opened exciting new opportunities in commerce, science, and computing applications. Processing the enormous quantities of data necessary for these advances requires large clusters, making distributed computing paradigms more crucial than ever. MapReduce is a programming model for expressing distributed computations on massive datasets and an execution framework for large-scale data processing on clusters of commodity servers. The programming model provides an easy-to-understand abstraction for designing scalable algorithms, while the execution framework transparently handles many system-level details, ranging from scheduling to synchronization to fault tolerance. This book focuses on MapReduce algorithm design, with an emphasis on text processing algorithms common in natural language processing, information retrieval, and machine learning. We introduce the notion of MapReduce design patterns, which represent general reusable solutions to commonly occurring problems across a variety of problem domains. This book not only intends to help the reader "think in MapReduce", but also discusses limitations of the programming model as well.
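The programming model described above can be illustrated with a minimal, self-contained simulation in plain Python (no Hadoop required; this is a sketch of the model, not Hadoop's actual Java API): a mapper emits key-value pairs, a shuffle phase groups values by key, and a reducer aggregates each group, shown here with the classic word-count example:

```python
from collections import defaultdict
from itertools import chain

# Mapper: emit (word, 1) for each word in a line of input.
def mapper(line):
    for word in line.lower().split():
        yield (word, 1)

# Shuffle: group all emitted values by key, as the execution framework
# would do between the map and reduce phases.
def shuffle(pairs):
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped.items()

# Reducer: sum the counts for one word.
def reducer(key, values):
    return (key, sum(values))

lines = ["hello world", "hello mapreduce"]
pairs = chain.from_iterable(mapper(line) for line in lines)
counts = dict(reducer(k, vs) for k, vs in shuffle(pairs))
print(counts)  # {'hello': 2, 'world': 1, 'mapreduce': 1}
```

In a real cluster the same three roles are distributed: mappers and reducers run in parallel across machines, and the framework handles the shuffle, scheduling, and fault tolerance that this sketch performs in a single process.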
P. Sethia and K. Karlapalem. Engineering Applications of Artificial Intelligence, 24(7): 1120--1127 (2011).
Infrastructures and Tools for Multiagent Systems.
G. Sadasivam and G. Baktavatchalam. MDAC '10: Proceedings of the 2010 Workshop on Massive Data Analytics on the Cloud, pp. 1--7. New York, NY, USA, ACM (2010).
D. Knoell, M. Atzmueller, C. Rieder, and K. Scherer. Proc. GWEM 2017, co-located with 9th Conference Professional Knowledge Management (WM 2017), Karlsruhe, Germany, KIT (2017).
J. Lin. SIGIR '09: Proceedings of the 32nd International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 155--162. New York, NY, USA, ACM (2009).
G. Limaye, J. Chaudhary, and P. Punjabi. International Journal on Recent and Innovation Trends in Computing and Communication, 3(3): 1699--1703 (March 2015).