Disable YARN: "YARN RM returned a failed response"

I am running into the error below:

YARN RM returned a failed response: HTTPConnectionPool(host='localhost', port=8088): Max retries exceeded with url: /ws/v1/cluster/apps?doAs=test&user.name=hue&user=test&limit=500&startedTimeBegin=1634693938000 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))

I do not have a Hadoop cluster. Can I disable this somehow, or maybe just hide the error in the UI? I only need the Apache Livy job history, for which the integration is already completed, but I keep seeing this error in the UI. Please help!

Hi, maybe this doc section can help?

Hello Harsh,

I already have this setting:

app_blacklist=security,impala,rdbms,jobsub,pig,hbase,sqoop,zookeeper,metastore,oozie,indexer,filebrowser

Just so you know, I need the Job Browser and Spark, so I removed them from the list, but it still shows the same error. Please let me know if there is a different way I could disable the Hadoop cluster.

Thanks

Looks like it is not able to reach the YARN Resource Manager on localhost. Since you can't disable the Job Browser, can you check whether any value is set under [[yarn_clusters]] in the config file?

If so, and it is not required, try removing it and see if that helps!
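For reference, in a typical hue.ini the YARN cluster settings live under the [hadoop] section; the sketch below shows what to look for. The hostnames and values here are placeholders, not your actual config:

```ini
# Sketch of the section to search for in hue.ini (values are examples only)
[hadoop]
  [[yarn_clusters]]
    [[[default]]]
      # If these point at a cluster that does not exist, the Job Browser
      # keeps retrying the Resource Manager and surfaces connection errors.
      resourcemanager_host=localhost
      resourcemanager_api_url=http://localhost:8088
```

If a block like this exists and points at a cluster you don't have, commenting it out (or correcting it) is what the suggestion above refers to.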

Here is the applied config:

[desktop]
allowed_hosts="*"
use_new_editor=true
secret_key=
app_blacklist=security,impala,rdbms,jobsub,pig,hbase,sqoop,zookeeper,metastore,oozie,indexer,filebrowser
django_debug_mode=false
enable_download=true
enable_link_sharing=true
[[auth]]
idle_session_timeout=-1
[[database]]
engine=postgresql_psycopg2
host=hue-db
port=5432
user=hue
password=
name=hue
[notebook]
show_notebooks=true
enable_presentation=true
[[interpreters]]
[[[postgresql]]]
name=postgresql
interface=sqlalchemy
options='{"url": "postgresql://hue:xxxxx@hue-db:5432/hue"}'
[[[sparksql]]]
name=SparkSql
interface=livy
[[[spark]]]
name=Scala
interface=livy
[[[jar]]]
name=Spark Submit Jar
interface=livy-batch
[dashboard]
is_enabled=true
use_gridster=true
[spark]
livy_server_url=http://livy
security_enabled=false
[jobbrowser]
share_jobs=true
disable_killing_jobs=false

Please let me know if you see any issues here. I do not have anything set under [[yarn_clusters]]; Livy is using its default config for yarn_clusters.
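One note on why the error mentions localhost:8088 even with nothing set: when [[yarn_clusters]] is absent, Hue falls back to built-in defaults, which point at a Resource Manager on the local host. The values below are assumed from a stock hue.ini and may differ by version, but they match the host and port in the error message:

```ini
# Approximate built-in defaults Hue uses when [[yarn_clusters]] is not set
# (assumed from a stock hue.ini; verify against your Hue version)
[hadoop]
  [[yarn_clusters]]
    [[[default]]]
      resourcemanager_host=localhost
      resourcemanager_api_url=http://localhost:8088
```

So with no cluster running locally, the Job Browser's polling of that default endpoint is the likely source of the "Connection refused" error.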