
Spark Solr(二)Persist Data to XML

www.MyException.Cn  Shared by a user on 2013-03-06

Differences between RDD and DataFrame
RDD[Person] - Person, Person, Person
DataFrame - Person[Name, Age, Height], Person[Name, Age, Height]

An RDD is a collection of Person objects; a DataFrame is a collection of Rows with a known schema.
Dataset vs DataFrame: df.as[ElementType] turns a DataFrame into a typed Dataset, and ds.toDF() goes the other way.

I tried to use spark-xml first, but it did not seem to work.

Then I tried a plain XMLStreamWriter instead; the utility class is as follows, XMLUtil.java:
package com.sillycat.sparkjava.app;

import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.List;

import javax.xml.stream.XMLOutputFactory;
import javax.xml.stream.XMLStreamException;
import javax.xml.stream.XMLStreamWriter;

import com.sun.xml.txw2.output.IndentingXMLStreamWriter;

public class XMLUtil
{

  public static void xmlStreamWriter( String filePath, List<Job> items )
  {

    XMLStreamWriter writer = null;
    // TRUNCATE_EXISTING so an existing, longer file does not keep stale trailing bytes
    try ( OutputStream os = Files.newOutputStream( Paths.get( filePath ),
      StandardOpenOption.CREATE, StandardOpenOption.TRUNCATE_EXISTING ) )
    {
      XMLOutputFactory outputFactory = XMLOutputFactory.newInstance();
      writer = new IndentingXMLStreamWriter(
        outputFactory.createXMLStreamWriter( os, "utf-8" ) );
      writer.writeStartDocument( "utf-8", "1.0" );
      writer.writeStartElement( "jobs" );
      for ( Job item : items )
      {
        writer.writeStartElement( "job" );

        writer.writeStartElement( "id" );
        writer.writeCharacters( item.getId() );
        writer.writeEndElement();

        writer.writeStartElement( "title" );
        writer.writeCData( item.getTitle() );
        writer.writeEndElement();

        writer.writeStartElement( "price" );
        writer.writeCharacters( item.getPrice().toBigInteger().toString() );
        writer.writeEndElement();

        writer.writeEndElement();

      }
      writer.writeEndElement();
      writer.writeEndDocument();
    }
    catch ( IOException | XMLStreamException e )
    {
      e.printStackTrace();
    }
    finally
    {
      if ( writer != null )
      {
        try
        {
          writer.close();
        }
        catch ( XMLStreamException e )
        {
          e.printStackTrace();
        }
      }
    }
  }

}
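For readers without com.sun.xml.txw2's IndentingXMLStreamWriter on the classpath, the same StAX pattern can be sketched with JDK-only classes. This standalone class is an illustration, not the project's actual code; the fields (id, title, price) mirror the Job bean used below, and the class and method names are made up for the example:

```java
import java.io.StringWriter;

import javax.xml.stream.XMLOutputFactory;
import javax.xml.stream.XMLStreamException;
import javax.xml.stream.XMLStreamWriter;

public class XMLUtilSketch
{
  // Writes a single <jobs><job>...</job></jobs> document to a String,
  // using only the StAX classes shipped with the JDK.
  public static String writeJob( String id, String title, String price )
    throws XMLStreamException
  {
    StringWriter out = new StringWriter();
    XMLStreamWriter writer =
      XMLOutputFactory.newInstance().createXMLStreamWriter( out );
    writer.writeStartDocument( "utf-8", "1.0" );
    writer.writeStartElement( "jobs" );
    writer.writeStartElement( "job" );

    writer.writeStartElement( "id" );
    writer.writeCharacters( id );
    writer.writeEndElement();

    writer.writeStartElement( "title" );
    writer.writeCData( title ); // CDATA keeps raw characters like & and < safe
    writer.writeEndElement();

    writer.writeStartElement( "price" );
    writer.writeCharacters( price );
    writer.writeEndElement();

    writer.writeEndElement(); // </job>
    writer.writeEndElement(); // </jobs>
    writer.writeEndDocument();
    writer.close();
    return out.toString();
  }

  public static void main( String[] args ) throws XMLStreamException
  {
    System.out.println( writeJob( "1", "Senior Java Developer", "17" ) );
  }
}
```

The real XMLUtil above writes to a file stream instead of a StringWriter, and wraps the writer for indentation, but the element-by-element structure is the same.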


Some more implementation and refactoring may be needed there, but right now my focus is to make the processing work first.

The whole process will be: Spark loads jobs from Solr Cloud -> 1000 jobs a batch -> filter in Java code -> filter in Spark SQL -> collect the results -> write to XML.

package com.sillycat.sparkjava.app;

import java.math.BigDecimal;
import java.util.List;

import org.apache.solr.common.SolrDocument;
import org.apache.spark.SparkConf;
import org.apache.spark.SparkContext;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Encoders;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

import com.lucidworks.spark.rdd.SolrJavaRDD;
import com.sillycat.sparkjava.base.SparkBaseApp;

public class SeniorJavaFeedToXMLApp extends SparkBaseApp
{

  private static final long serialVersionUID = 8364133452168714109L;

  @Override
  protected String getAppName()
  {
    return "SeniorJavaFeedToXMLApp";
  }

  @Override
  public void executeTask( List<String> params )
  {
    SparkConf conf = this.getSparkConf();
    SparkContext sc = new SparkContext( conf );
    SparkSession sqlSession = new SparkSession( sc );

    String zkHost =
      "zookeeper1.us-east-1.elasticbeanstalk.com,zookeeper2.us-east-1.elasticbeanstalk.com,zookeeper3.us-east-1.elasticbeanstalk.com/solr/allJobs";
    String collection = "allJobs";
    //String solrQuery = "expired: false AND title: Java* AND source_id: 4675"; //82  18k jobs java
    String solrQuery = "expired: false AND title: Java*";
    String keyword = "Developer";

    logger.info( "Prepare the resource from " + solrQuery );
    JavaRDD<SolrDocument> rdd = this.generateRdd( sc, zkHost, collection, solrQuery );
    logger.info( "System get sources job count:" + rdd.count() );

    logger.info( "Executing the calculation based on keyword " + keyword );
    JavaRDD<SolrDocument> solrDocs = processRowFilters( rdd, keyword ); //java code filter java developer 10k
    JavaRDD<Job> jobs = solrDocs.map( new Function<SolrDocument, Job>()
    {

      private static final long serialVersionUID = -4456732708499340880L;

      @Override
      public Job call( SolrDocument solr ) throws Exception
      {
        Job job = new Job();
        job.setId( solr.getFieldValue( "id" ).toString() );
        job.setTitle( solr.getFieldValue( "title" ).toString() );
        job.setPrice( new BigDecimal( solr.getFieldValue( "cpc" ).toString() ) );
        return job;
      }

    } );

    Dataset<Row> jobDF = sqlSession.createDataFrame( jobs, Job.class );
    jobDF.createOrReplaceTempView( "job" );
    Dataset<Row> jobHighDF = sqlSession.sql( "SELECT id, title, price FROM job WHERE price > 16 " ); // price > 16 1k

    logger.info( "Find some jobs for you:" + jobHighDF.count() );
    logger.info( "Job Content is:" + jobHighDF.collectAsList().get( 0 ) );

    List<Job> jobsHigh = jobHighDF.as( Encoders.bean( Job.class ) ).collectAsList();

    logger.info( "Persist some jobs to XML:" + jobsHigh.size() );
    logger.info( "Persist some jobs to XML:" + jobsHigh.get( 0 ) );

    XMLUtil.xmlStreamWriter( "/tmp/jobs.xml", jobsHigh );

    sqlSession.close();
    sc.stop();
  }

  private JavaRDD<SolrDocument> generateRdd( SparkContext sc, String zkHost, String collection, String solrQuery )
  {
    SolrJavaRDD solrRDD = SolrJavaRDD.get( zkHost, collection, sc );
    JavaRDD<SolrDocument> resultsRDD = solrRDD.queryShards( solrQuery );
    return resultsRDD;
  }

  private JavaRDD<SolrDocument> processRowFilters( JavaRDD<SolrDocument> rows, String keyword )
  {
    JavaRDD<SolrDocument> lines = rows.filter( new Function<SolrDocument, Boolean>()
    {
      private static final long serialVersionUID = 1L;

      @Override
      public Boolean call( SolrDocument s ) throws Exception
      {
        Object titleObj = s.getFieldValue( "title" );
        if ( titleObj != null )
        {
          String title = titleObj.toString();
          if ( title.toLowerCase().contains( keyword.toLowerCase() ) )
          {
            return true;
          }
        }
        return false;
      }
    } );
    return lines;
  }

}
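The keyword match inside processRowFilters boils down to a null-safe, case-insensitive substring check. Pulled out as a plain helper (a hypothetical titleMatches method, shown only for illustration) it can be unit-tested without a Spark context:

```java
public class TitleFilterSketch
{
  // Mirrors the predicate in processRowFilters: null-safe,
  // case-insensitive "title contains keyword" check.
  public static boolean titleMatches( Object titleObj, String keyword )
  {
    if ( titleObj == null )
    {
      return false;
    }
    return titleObj.toString().toLowerCase().contains( keyword.toLowerCase() );
  }

  public static void main( String[] args )
  {
    System.out.println( titleMatches( "Senior Java Developer", "developer" ) ); // true
    System.out.println( titleMatches( null, "Developer" ) );                    // false
  }
}
```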

Exception:
Upload to HDFS
18/01/23 23:55:11 INFO Client: Uploading resource file:/opt/spark/jars/metrics-core-3.1.2.jar -> hdfs://fr-stage-api:9000/user/ec2-user/.sparkStaging/application_1515130308141_0017/metrics-core-3.1.2.jar

Error on YARN
java.io.FileNotFoundException: File does not exist: hdfs://fr-stage-api:9000/user/ec2-user/.sparkStaging/application_1515130308141_0017/metrics-core-3.1.2.jar

Solution:
Comment out this line in the base class:
conf.set( "spark.master", "local[4]" );
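With that hard-coded local master removed, the master can be supplied at submit time instead. A typical YARN submission looks roughly like this; the main class and jar path are placeholders for this project's actual entry point, not confirmed values:

```shell
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --class com.sillycat.sparkjava.SparkJavaApp \
  /path/to/sparkjava-assembly.jar SeniorJavaFeedToXMLApp
```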

References:
http://sillycat.iteye.com/blog/2407961
https://www.jianshu.com/p/c0181667daa0

xml
https://www.ibm.com/developerworks/library/x-tipbigdoc/index.html
https://softwarecave.org/2014/02/15/write-xml-documents-using-streaming-api-for-xml-stax/
https://www.ibm.com/developerworks/cn/xml/x-tipstx4/index.html
https://stackoverflow.com/questions/4616383/xmlstreamwriter-indentation/6723007

submit task
https://github.com/mahmoudparsian/data-algorithms-book/blob/master/misc/how-to-submit-spark-job-to-yarn-from-java-code.md
https://stackoverflow.com/questions/44444215/submitting-spark-application-via-yarn-client
http://massapi.com/method/org/apache/hadoop/conf/Configuration.addResource.html
https://stackoverflow.com/questions/17265002/hadoop-no-filesystem-for-scheme-file
