【Kafka】Installation and Configuration

Unless otherwise noted, all posts on this blog are original. Please credit the source when reposting: Big data enthusiast (http://www.lubinsu.com/)

Permalink: 【Kafka】Installation and Configuration (http://www.lubinsu.com/kafka-install-2/)

1. Download the release tarball, extract it, and configure the ZooKeeper node:

[root@hadoop kafka_2.11-0.9.0.1]# cat config/zookeeper.properties
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# the directory where the snapshot is stored.
dataDir=/tmp/zookeeper-logs
# the port at which the clients will connect
clientPort=2181
# disable the per-ip limit on the number of connections since this is a non-production config
maxClientCnxns=0
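The snippet above is in the standard Java properties format, which is also how ZooKeeper itself parses the file. As a quick sanity check (a minimal stdlib sketch; the inline string simply mirrors the non-comment lines of the file above):

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

public class ZkConfigCheck {
    public static void main(String[] args) throws IOException {
        // Mirrors the non-comment lines of config/zookeeper.properties above.
        String conf = "dataDir=/tmp/zookeeper-logs\n"
                + "clientPort=2181\n"
                + "maxClientCnxns=0\n";

        Properties props = new Properties();
        props.load(new StringReader(conf)); // same key=value parsing ZooKeeper applies

        System.out.println("clientPort = " + props.getProperty("clientPort"));
        System.out.println("dataDir    = " + props.getProperty("dataDir"));
    }
}
```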
 
2. Start ZooKeeper:
bin/zookeeper-server-start.sh config/zookeeper.properties
 
3. Configure the Kafka cluster (here, four brokers on one host):
config/server.properties:
broker.id=0
listeners=PLAINTEXT://:9092
host.name=hadoop.slave2
log.dirs=/tmp/kafka-logs-0
 
config/server1.properties:
broker.id=1
listeners=PLAINTEXT://:9093
host.name=hadoop.slave2
log.dirs=/tmp/kafka-logs-1
 
config/server2.properties:
broker.id=2
listeners=PLAINTEXT://:9094
host.name=hadoop.slave2
log.dirs=/tmp/kafka-logs-2
 
config/server3.properties:
broker.id=3
listeners=PLAINTEXT://:9095
host.name=hadoop.slave2
log.dirs=/tmp/kafka-logs-3
 
Additional machines can be configured the same way. Start the four brokers:
bin/kafka-server-start.sh config/server.properties &
bin/kafka-server-start.sh config/server1.properties &
bin/kafka-server-start.sh config/server2.properties &
bin/kafka-server-start.sh config/server3.properties &
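Every broker must have a distinct broker.id, and brokers sharing a host must also use distinct listener ports; duplicating either is a common copy-paste mistake with per-broker property files. A small stdlib sketch of that consistency check (the id/port pairs mirror the four files above):

```java
import java.util.HashSet;
import java.util.Set;

public class BrokerConfigCheck {

    /** True if every broker has a distinct id and a distinct listener port. */
    static boolean unique(int[][] brokers) {
        Set<Integer> ids = new HashSet<>();
        Set<Integer> ports = new HashSet<>();
        for (int[] b : brokers) {
            // Set.add() returns false when the value is already present
            if (!ids.add(b[0]) || !ports.add(b[1])) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        // broker.id / port pairs from the four server*.properties files above
        int[][] cluster = {{0, 9092}, {1, 9093}, {2, 9094}, {3, 9095}};
        System.out.println("configs consistent: " + unique(cluster));
    }
}
```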
 
 
Testing:
bin/kafka-topics.sh --create --zookeeper hadoop.slave2:2181 --replication-factor 3 --partitions 4 --topic kafkaTopic

bin/kafka-topics.sh --list --zookeeper hadoop.slave2:2181

bin/kafka-console-producer.sh --broker-list hadoop.slave2:9092 --topic kafkaTopic
bin/kafka-console-consumer.sh --zookeeper hadoop.slave2:2181 --topic kafkaTopic --from-beginning
 
Log4j configuration:
log4j.rootLogger=INFO,stdout,kafka
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%5p - %m%n
# appender kafka
log4j.appender.kafka=kafka.producer.KafkaLog4jAppender
log4j.appender.kafka.topic=kafkaTopic
# multiple brokers are separated by comma ",".
log4j.appender.kafka.brokerList=hadoop.slave2:9092,hadoop.slave2:9093,hadoop.slave2:9094,hadoop.slave2:9095
log4j.appender.kafka.compressionType=none
log4j.appender.kafka.syncSend=true
log4j.appender.kafka.layout=org.apache.log4j.PatternLayout
log4j.appender.kafka.layout.ConversionPattern=%m
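One pitfall with brokerList: it is safest to write the comma-separated host:port list without spaces, since whether stray whitespace is tolerated depends on who parses the string. A minimal stdlib sketch of a tolerant parser (the `parse` helper and `BrokerList` class are illustrative, not part of the Kafka appender):

```java
import java.util.ArrayList;
import java.util.List;

public class BrokerList {

    /** Split a comma-separated host:port list, tolerating stray whitespace. */
    static List<String> parse(String brokerList) {
        List<String> brokers = new ArrayList<>();
        for (String entry : brokerList.split(",")) {
            String b = entry.trim(); // drop spaces around each entry
            if (!b.isEmpty()) brokers.add(b);
        }
        return brokers;
    }

    public static void main(String[] args) {
        // A brokerList value with spaces after the commas
        String raw = "hadoop.slave2:9092, hadoop.slave2:9093, hadoop.slave2:9094, hadoop.slave2:9095";
        System.out.println(parse(raw));
    }
}
```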
pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.lubinsu</groupId>
    <artifactId>hadoop</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>jar</packaging>
    <name>hadoop</name>
    <url>http://maven.apache.org</url>
    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    </properties>
    <dependencies>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>3.8.1</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
            <version>1.2.17</version>
        </dependency>
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka_2.9.2</artifactId>
            <version>0.8.2.1</version>
        </dependency>
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka-clients</artifactId>
            <version>0.8.2.1</version>
        </dependency>
        <dependency>
            <groupId>com.google.guava</groupId>
            <artifactId>guava</artifactId>
            <version>18.0</version>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <configuration>
                    <source>1.7</source>
                    <target>1.7</target>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-shade-plugin</artifactId>
                <version>2.4.2</version>
                <executions>
                    <execution>
                        <phase>package</phase>
                        <goals>
                            <goal>shade</goal>
                        </goals>
                        <configuration>
                            <transformers>
                                <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                                    <mainClass>com.lubinsu.pub.Test</mainClass>
                                </transformer>
                            </transformers>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>
Producer:
package com.changtu.kafka;

import org.apache.log4j.Logger;

/**
 * Created by lubinsu on 2016/2/22
 */
public class Producer {

    private static final Logger LOGGER = Logger.getLogger(Producer.class);

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < 20; i++) {
            LOGGER.info("Info [" + i + "]");
            Thread.sleep(500);
        }
    }
}
Consumer:
bin/kafka-console-consumer.sh --zookeeper hadoop.slave2:2181 --topic kafkaTopic --from-beginning
Test example 2:
[hadoop@bigdata3 csd]$ kafka-topics --create --zookeeper bigdata1,bigdata2,bigdata4 --topic snoopy --partitions 1 --replication-factor 1
[hadoop@bigdata5 ~]$ kafka-topics --zookeeper bigdata1,bigdata2,bigdata4 --list
[hadoop@bigdata3 csd]$ kafka-console-producer --broker-list bigdata3:9092,bigdata5:9092,bigdata6:9092 --topic snoopy
[hadoop@bigdata3 ~]$ kafka-console-consumer --zookeeper bigdata1,bigdata2,bigdata4 --topic snoopy --from-beginning
