
Using Hadoop Distributed Cache

Source: http://www.ashishpaliwal.com/blog/2012/04/using-hadoop-distributed-cache/

 

 

Hadoop has a distributed cache mechanism to make files that Map/Reduce jobs may need available locally on each node. This post tries to expand a bit on the information provided by the javadoc of DistributedCache.

Use Case

Let's understand our use case in a bit more detail so that we can follow the code snippets.
We have a key-value file that we need to use in our Map tasks. For simplicity, let's say we need to replace all keywords that we encounter during parsing with some other value.

So what we need is:

  • A key-value file (let's use a Properties file)
  • The Mapper code that uses the file

Step 1

Place the key-values file on the HDFS

hadoop fs -put ./keyvalues.properties cache/keyvalues.properties

This path is relative to the user's home folder on HDFS.
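
For illustration, keyvalues.properties could look like the following; the actual keys and replacement values are whatever your job needs, and these entries are made up:

# keyword=replacement
colour=color
organise=organize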

Step 2

Write the Mapper code that uses it

import java.io.FileReader;
import java.io.IOException;
import java.util.Properties;

import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class DistributedCacheMapper extends Mapper<LongWritable, Text, Text, Text> {

    Properties cache;

    @Override
    protected void setup(Context context) throws IOException, InterruptedException {
        super.setup(context);
        // paths to the cache files localized on this node
        Path[] localCacheFiles = DistributedCache.getLocalCacheFiles(context.getConfiguration());

        if (localCacheFiles != null) {
            // expecting only a single file here
            for (int i = 0; i < localCacheFiles.length; i++) {
                Path localCacheFile = localCacheFiles[i];
                cache = new Properties();
                cache.load(new FileReader(localCacheFile.toString()));
            }
        } else {
            // do your error handling here
        }
    }

    @Override
    public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        // use the cache here
        // if value contains some attribute, cache.get(<value>)
        // do some action or replace with something else
    }
}

The Mapper code is simple enough. During the setup phase, we read the file and populate the Properties object, and inside map() we use the cache to look up certain keys and replace them if they are present.
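
For illustration, a minimal map() body along those lines, which could drop into the mapper above, might look like this; the whitespace tokenization and the offset-keyed output are assumptions, not part of the original post:

@Override
public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
    // replace every token that appears as a key in the cache, leave the rest untouched
    StringBuilder rewritten = new StringBuilder();
    for (String token : value.toString().split("\\s+")) {
        rewritten.append(cache.getProperty(token, token)).append(' ');
    }
    context.write(new Text(Long.toString(key.get())), new Text(rewritten.toString().trim()));
}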

Step 3

Add the properties file to your driver code

JobConf jobConf = new JobConf();
// set job properties
// set the cache file
DistributedCache.addCacheFile(new URI("cache/keyvalues.properties#keyvalues.properties"), jobConf);
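
Note that JobConf belongs to the older mapred API, while the mapper above extends the newer mapreduce API's Mapper. A minimal driver sketch for the newer API is shown below; the class name, job name, and the command-line input/output paths are assumptions for illustration (on Hadoop 2.x and later, Job.getInstance(conf) and job.addCacheFile(uri) are the preferred equivalents):

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class DistributedCacheDriver {

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // register the cache file before the job is submitted
        DistributedCache.addCacheFile(
                new URI("cache/keyvalues.properties#keyvalues.properties"), conf);

        Job job = new Job(conf, "distributed cache example");
        job.setJarByClass(DistributedCacheDriver.class);
        job.setMapperClass(DistributedCacheMapper.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}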

 

Some reference material:

 

DistributedCache

 

DistributedCache efficiently distributes large, read-only, application-specific files.

DistributedCache is a facility provided by the Map/Reduce framework to cache files needed by applications (including text files, archives, jar files, and so on).

An application specifies the files to be cached in the JobConf via URLs (hdfs://). DistributedCache assumes that files specified via hdfs:// URLs are already present on the FileSystem.

The Map/Reduce framework copies the necessary files to the slave nodes before any tasks of the job are executed. It is efficient because each job's files are copied only once and are cached for the slave nodes that do not yet have them.

DistributedCache tracks cached files by their modification timestamps. While a job is executing, the cached files must not be modified by the application or by external programs.

DistributedCache can distribute simple read-only data or text files as well as more complex types such as archives and jar files. Archives (zip, tar, tgz, and tar.gz files) are un-archived on the slave nodes. These files can have execution permissions set.

Users can distribute files by setting the property mapred.cache.{files|archives}. To distribute multiple files, separate their paths with commas. The same property can be set through the API: DistributedCache.addCacheFile(URI, conf) / DistributedCache.addCacheArchive(URI, conf) and DistributedCache.setCacheFiles(URIs, conf) / DistributedCache.setCacheArchives(URIs, conf), where the URI has the form hdfs://host:port/absolute-path#link-name. In Streaming programs, files can be distributed through the command line options -cacheFile/-cacheArchive.
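
A minimal sketch of the API calls mentioned above; the namenode address and paths are hypothetical, and the java.net.URI and org.apache.hadoop.filecache.DistributedCache imports are assumed:

Configuration conf = new Configuration();
// distribute a plain file and an archive; the archive is un-archived on the slaves
DistributedCache.addCacheFile(new URI("hdfs://namenode:8020/user/me/lookup.properties"), conf);
DistributedCache.addCacheArchive(new URI("hdfs://namenode:8020/user/me/deps.tgz"), conf);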

Users can have DistributedCache create symbolic links to the cached files in the task's current working directory via the DistributedCache.createSymlink(Configuration) method, or by setting the configuration property mapred.create.symlink to yes. The distributed cache uses the fragment of the URI as the name of the link. For example, for the URI hdfs://namenode:port/lib.so.1#lib.so, a link named lib.so is created in the task's current working directory, pointing to lib.so.1 in the distributed cache.
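
A sketch of the symlink variant, with hypothetical paths; the #fragment names the link that appears in the task's working directory:

Configuration conf = new Configuration();
// opt in to symlink creation, then name the link via the URI fragment
DistributedCache.createSymlink(conf);
DistributedCache.addCacheFile(new URI("hdfs://namenode:8020/lib/lib.so.1#lib.so"), conf);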

DistributedCache can also serve as a rudimentary software-distribution mechanism in map/reduce tasks, for distributing jar files and native libraries. The DistributedCache.addArchiveToClassPath(Path, Configuration) and DistributedCache.addFileToClassPath(Path, Configuration) APIs cache files and jar files and also add them to the classpath of the child JVM. The same effect can be achieved by setting the configuration property mapred.job.classpath.{files|archives}. Cached files can likewise be used to distribute and load native libraries.
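
A sketch of the classpath variant, again with hypothetical jar/archive paths:

Configuration conf = new Configuration();
// ship a jar and an archive with the job and add them to the child JVM's classpath
DistributedCache.addFileToClassPath(new Path("/user/me/lib/parser.jar"), conf);
DistributedCache.addArchiveToClassPath(new Path("/user/me/lib/deps.zip"), conf);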

 
