installation - NameNode is not formatted in Hadoop


I am installing Hadoop 2.7.3 under a normal (non-root) account.

I created the following directories:

    /usr/local/hadoop-2.7.3/data/dfs/namenode
    /usr/local/hadoop-2.7.3/data/dfs/namesecondary
    /usr/local/hadoop-2.7.3/data/dfs/datanode
    /usr/local/hadoop-2.7.3/data/yarn/nm-local-dir
    /usr/local/hadoop-2.7.3/data/yarn/system/rmstore
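(For reference, all five directories can be created in a single command like the one below; this assumes the current user may write under /usr/local/hadoop-2.7.3.)

    # -p creates missing parent directories; brace expansion requires bash
    mkdir -p /usr/local/hadoop-2.7.3/data/dfs/{namenode,namesecondary,datanode} \
             /usr/local/hadoop-2.7.3/data/yarn/nm-local-dir \
             /usr/local/hadoop-2.7.3/data/yarn/system/rmstore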

Next I ran these commands:

    bin/hdfs namenode -format
    sudo sbin/start-all.sh
    jps

After that:

In the normal account, I see only Jps.

In the root account, I see Jps, DataNode, SecondaryNameNode, NodeManager, and ResourceManager.

I have two questions:

  1. Why do I see only Jps in the normal account?
  2. Why is the NameNode not started?

Thanks for reading. If you can help me, I would appreciate it.

NameNode log file:

    2017-04-06 01:16:15,217 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
    2017-04-06 01:16:15,220 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode []
    2017-04-06 01:16:15,680 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
    2017-04-06 01:16:15,843 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
    2017-04-06 01:16:15,843 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
    2017-04-06 01:16:15,845 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: fs.defaultFS is hdfs://localhost:9010
    2017-04-06 01:16:15,846 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Clients are to use localhost:9010 to access this namenode/service.
    2017-04-06 01:16:16,070 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://localhost:50070
    2017-04-06 01:16:16,152 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
    2017-04-06 01:16:16,158 INFO org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
    2017-04-06 01:16:16,165 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.namenode is not defined
    2017-04-06 01:16:16,169 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
    2017-04-06 01:16:16,171 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
    2017-04-06 01:16:16,171 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
    2017-04-06 01:16:16,171 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
    2017-04-06 01:16:16,300 INFO org.apache.hadoop.http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
    2017-04-06 01:16:16,303 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
    2017-04-06 01:16:16,330 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 50070
    2017-04-06 01:16:16,330 INFO org.mortbay.log: jetty-6.1.26
    2017-04-06 01:16:16,581 INFO org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:50070
    2017-04-06 01:16:16,612 WARN org.apache.hadoop.hdfs.server.common.Util: Path /usr/local/hadoop-2.7.3/data/dfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
    2017-04-06 01:16:16,612 WARN org.apache.hadoop.hdfs.server.common.Util: Path /usr/local/hadoop-2.7.3/data/dfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
    2017-04-06 01:16:16,613 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
    2017-04-06 01:16:16,613 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of redundant storage directories!
    2017-04-06 01:16:16,617 WARN org.apache.hadoop.hdfs.server.common.Util: Path /usr/local/hadoop-2.7.3/data/dfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
    2017-04-06 01:16:16,617 WARN org.apache.hadoop.hdfs.server.common.Util: Path /usr/local/hadoop-2.7.3/data/dfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
    2017-04-06 01:16:16,639 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No KeyProvider found.
    2017-04-06 01:16:16,639 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsLock is fair:true
    2017-04-06 01:16:16,668 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
    2017-04-06 01:16:16,668 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
    2017-04-06 01:16:16,669 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
    2017-04-06 01:16:16,669 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: The block deletion will start around 2017 Apr 06 01:16:16
    2017-04-06 01:16:16,670 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlocksMap
    2017-04-06 01:16:16,670 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
    2017-04-06 01:16:16,671 INFO org.apache.hadoop.util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
    2017-04-06 01:16:16,671 INFO org.apache.hadoop.util.GSet: capacity      = 2^21 = 2097152 entries
    2017-04-06 01:16:16,690 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=false
    2017-04-06 01:16:16,691 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication         = 1
    2017-04-06 01:16:16,691 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication             = 512
    2017-04-06 01:16:16,691 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication             = 1
    2017-04-06 01:16:16,691 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams      = 2
    2017-04-06 01:16:16,691 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
    2017-04-06 01:16:16,691 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer        = false
    2017-04-06 01:16:16,691 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
    2017-04-06 01:16:16,706 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner             = root (auth:SIMPLE)
    2017-04-06 01:16:16,707 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup          = supergroup
    2017-04-06 01:16:16,707 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
    2017-04-06 01:16:16,707 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
    2017-04-06 01:16:16,708 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
    2017-04-06 01:16:16,963 INFO org.apache.hadoop.util.GSet: Computing capacity for map INodeMap
    2017-04-06 01:16:16,963 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
    2017-04-06 01:16:16,970 INFO org.apache.hadoop.util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
    2017-04-06 01:16:16,970 INFO org.apache.hadoop.util.GSet: capacity      = 2^20 = 1048576 entries
    2017-04-06 01:16:16,971 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: ACLs enabled? false
    2017-04-06 01:16:16,971 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: XAttrs enabled? true
    2017-04-06 01:16:16,971 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: Maximum size of an xattr: 16384
    2017-04-06 01:16:16,971 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
    2017-04-06 01:16:16,977 INFO org.apache.hadoop.util.GSet: Computing capacity for map cachedBlocks
    2017-04-06 01:16:16,977 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
    2017-04-06 01:16:16,977 INFO org.apache.hadoop.util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
    2017-04-06 01:16:16,977 INFO org.apache.hadoop.util.GSet: capacity      = 2^18 = 262144 entries
    2017-04-06 01:16:16,978 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
    2017-04-06 01:16:16,978 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
    2017-04-06 01:16:16,978 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
    2017-04-06 01:16:16,980 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
    2017-04-06 01:16:16,980 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
    2017-04-06 01:16:16,980 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
    2017-04-06 01:16:16,983 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache on namenode is enabled
    2017-04-06 01:16:16,983 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
    2017-04-06 01:16:16,984 INFO org.apache.hadoop.util.GSet: Computing capacity for map NameNodeRetryCache
    2017-04-06 01:16:16,984 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
    2017-04-06 01:16:16,984 INFO org.apache.hadoop.util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
    2017-04-06 01:16:16,984 INFO org.apache.hadoop.util.GSet: capacity      = 2^15 = 32768 entries
    2017-04-06 01:16:17,005 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /usr/local/hadoop-2.7.3/data/dfs/namenode/in_use.lock acquired by nodename 5360@localhost
    2017-04-06 01:16:17,007 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception loading fsimage
    java.io.IOException: NameNode is not formatted.
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:225)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:975)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:681)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:585)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:645)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:812)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:796)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1493)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1559)
    2017-04-06 01:16:17,032 INFO org.mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:50070
    2017-04-06 01:16:17,035 WARN org.apache.hadoop.http.HttpServer2: HttpServer Acceptor: isRunning is false. Rechecking.
    2017-04-06 01:16:17,035 WARN org.apache.hadoop.http.HttpServer2: HttpServer Acceptor: isRunning is false
    2017-04-06 01:16:17,035 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
    2017-04-06 01:16:17,035 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
    2017-04-06 01:16:17,035 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
    2017-04-06 01:16:17,035 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
    java.io.IOException: NameNode is not formatted.
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:225)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:975)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:681)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:585)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:645)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:812)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:796)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1493)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1559)
    2017-04-06 01:16:17,036 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
    2017-04-06 01:16:17,040 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:

Why do I see only Jps in the normal account?

Since you started the daemons with sudo, the root user owns those processes. The jps command reports only the JVMs it has access permissions for, and your normal account has no access to processes owned by root.
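You can confirm this by running jps as each user. The output below is only illustrative of your setup; the PIDs are made up and will differ on your machine:

    # as the normal user: only JVMs owned by that user are visible
    $ jps
    6021 Jps

    # as root, which owns the daemons started via sudo
    $ sudo jps
    5412 DataNode
    5523 SecondaryNameNode
    5688 ResourceManager
    5801 NodeManager
    6100 Jps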

Why is the NameNode not started?

java.io.IOException: NameNode is not formatted.

The NameNode has not yet been formatted. It is possible that you missed providing Y when the format command prompted you with (Y/N).
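Run the format again as the same user that owns the data directories, and answer Y if prompted. As a sketch (the -force option is part of the hdfs namenode usage in Hadoop 2.x, but be aware that reformatting wipes any HDFS metadata already in the name directory):

    # re-run interactively; answer Y at the prompt and look for
    # "... has been successfully formatted." in the output
    bin/hdfs namenode -format

    # or format without prompting (erases existing metadata in the name dir)
    bin/hdfs namenode -format -force

After a successful format, restart the daemons; jps run as root should then list NameNode alongside the other daemons.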

