PySpark and Spark not working in Apache Hue. Help!

I want to know the cause.

os : Ubuntu 20.04
hue version : 4.10.0
livy version : 0.8.0
spark version : 3.3.0
hadoop version : 3.3.4
hive version : 3.1.3

After installing Livy to use PySpark, I confirmed with curl that PySpark statements run. In Apache Hue -> PySpark, print(1+1) also works fine. However, the code below, and every other non-trivial PySpark command, fails.

import random

NUM_SAMPLES = 100000

def sample(p):
    x, y = random.random(), random.random()
    return 1 if x * x + y * y < 1 else 0

count = sc.parallelize(range(0, NUM_SAMPLES)).map(sample).reduce(lambda a, b: a + b)
print(count)
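For reference (my own sanity check, not part of the failing job): the same Monte Carlo computation can be run without Spark or Livy, which confirms the snippet itself is valid Python. Since count/NUM_SAMPLES approximates pi/4, the result should be close to 78540:

import random

NUM_SAMPLES = 100000

def sample(p):
    # Draw a random point in the unit square; return 1 if it falls
    # inside the quarter circle of radius 1, else 0.
    x, y = random.random(), random.random()
    return 1 if x * x + y * y < 1 else 0

# Pure-Python equivalent of the parallelize/map/reduce pipeline.
count = sum(map(sample, range(NUM_SAMPLES)))
print(count)                       # close to NUM_SAMPLES * pi / 4
print(4.0 * count / NUM_SAMPLES)   # rough estimate of pi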
When the above sample program is executed, the following error message is displayed.

[07/Oct/2022 15:24:36 +0900] decorators ERROR Error running fetch_result_data
Traceback (most recent call last):
  File "/home/hue/hue-4.10.0/desktop/libs/notebook/src/notebook/decorators.py", line 119, in wrapper
    return f(*args, **kwargs)
  File "/home/hue/hue-4.10.0/desktop/libs/notebook/src/notebook/api.py", line 329, in fetch_result_data
    response = _fetch_result_data(request, notebook, snippet, operation_id, rows=rows, start_over=start_over)
  File "/home/hue/hue-4.10.0/desktop/libs/notebook/src/notebook/api.py", line 339, in _fetch_result_data
    'result': get_api(request, snippet).fetch_result(notebook, snippet, rows, start_over)
  File "/home/hue/hue-4.10.0/desktop/libs/notebook/src/notebook/connectors/spark_shell.py", line 235, in fetch_result
    raise QueryError(msg)
notebook.connectors.base.QueryError: Traceback (most recent call last):
  File "/tmp/3309927620969108702", line 223, in execute
    code = compile(mod, '', 'exec')
TypeError: required field "type_ignores" missing from Module
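To illustrate where that TypeError comes from (my own sketch, independent of Livy): starting with Python 3.8, ast.Module has a required type_ignores field, so code that builds a Module node the pre-3.8 way fails with exactly this message when the node is compiled:

import ast

# Parse a statement, then rebuild the Module node without the
# type_ignores field that became required in Python 3.8.
parsed = ast.parse("x = 1 + 1")
old_style = ast.Module(body=parsed.body)  # pre-3.8 style: no type_ignores

try:
    compile(old_style, "", "exec")
    error_message = ""
except TypeError as err:
    error_message = str(err)

print(error_message)  # required field "type_ignores" missing from Module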
Submitting the same code directly to Livy with curl produces the same error.

curl localhost:8998/sessions/11/statements -X POST -H 'Content-Type: application/json' -d '{"code":"import random\n\nNUM_SAMPLES = 100000\n\ndef sample(p):\n x, y = random.random(), random.random()\n return 1 if x * x + y * y < 1 else 0\n\ncount = sc.parallelize(range(0, NUM_SAMPLES)).map(sample).reduce(lambda a, b: a + b)\nprint(count)"}'
livy@bigdata:~$ curl localhost:8998/sessions/11/statements/1
{"id":1,"code":"import random\n\nNUM_SAMPLES = 100000\n\ndef sample(p):\n x,
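Aside: building the JSON body in Python instead of inlining it into the curl command avoids the shell-quoting damage visible above. A sketch using only the standard library (the session URL in the comment is the same one used above):

import json

# Assemble the PySpark snippet line by line so that newlines and
# indentation survive JSON encoding intact.
lines = [
    "import random",
    "",
    "NUM_SAMPLES = 100000",
    "",
    "def sample(p):",
    "    x, y = random.random(), random.random()",
    "    return 1 if x * x + y * y < 1 else 0",
    "",
    "count = sc.parallelize(range(0, NUM_SAMPLES)).map(sample).reduce(lambda a, b: a + b)",
    "print(count)",
]
payload = json.dumps({"code": "\n".join(lines)})
print(payload)

# POST this as the request body, e.g.:
# curl localhost:8998/sessions/11/statements -X POST \
#      -H 'Content-Type: application/json' -d "$payload"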