
Announcing Hazelcast Jet 0.6 – The 3rd Generation Big Data Processing Engine

23 points by Darren1968 · about 7 years ago

2 comments

gregrluck · about 7 years ago

This is done using the Jet API programmatically from your Java application. Jet is meant to be used operationally with developed and deployed applications.

We can use HDFS as a source or a sink. See https://github.com/hazelcast/hazelcast-jet-code-samples/blob/0.6-maintenance/batch/wordcount-hadoop/src/main/java/HadoopWordCount.java for an HDFS WordCount example.

Jet jobs run in an isolated class loader, which is distributed to the cluster when the job is started. You do this by adding classes/jars to JobConfig. See http://docs.hazelcast.org/docs/jet/0.6/manual/#practical-considerations for details.
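A minimal sketch of the submission flow described above, using the Jet 0.6-era Pipeline API (the trivial map-to-map pipeline here is an illustrative placeholder, not the HDFS WordCount from the linked sample; exact method names may differ across Jet versions):

```java
import com.hazelcast.jet.Jet;
import com.hazelcast.jet.JetInstance;
import com.hazelcast.jet.config.JobConfig;
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sinks;
import com.hazelcast.jet.pipeline.Sources;

public class SubmitJob {
    public static void main(String[] args) {
        // Connect to an already-running Jet cluster as a client.
        JetInstance jet = Jet.newJetClient();

        // Build a trivial pipeline: read one distributed map, write another.
        Pipeline p = Pipeline.create();
        p.drawFrom(Sources.map("input"))
         .drainTo(Sinks.map("output"));

        // Add this application's classes to the JobConfig; Jet distributes
        // them to the cluster and runs the job in an isolated class loader.
        JobConfig config = new JobConfig();
        config.addClass(SubmitJob.class);

        // Submit the job and block until it completes.
        jet.newJob(p, config).join();
    }
}
```

This is the programmatic model the comment contrasts with `spark-submit`: instead of shipping a jar via a CLI tool, the application itself connects to the cluster and attaches its resources through `JobConfig`.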
dxxvi · about 7 years ago

If I already have a Hadoop cluster, which can run a Spark job from a jar file on HDFS with spark-submit, how can I install Hazelcast Jet so that I can do the same as with Spark?