Reading and Writing Parquet file with nested datatype using Pyspark
$2-8 USD / hour
Completed
Posted over 2 years ago
Please find the images attached
Read the parquet file row by row and column by column. Each column value is passed to another function that returns a new value; that returned value replaces the original string in the column, and the records (with the changed values) are written to a new parquet file. While writing, the order of the records and the schema structure must stay exactly the same (apart from the changed values).
For example: in the sample [login to view URL], we see [login to view URL] for all old names James, Michael, Robert, Washington...
for old_name --> James, create a function named transformer(); if we pass [login to view URL] ---> brown should be replaced with black,
for old_name --> Michael, if we pass [login to view URL] ---> null should be replaced with black
the changes should appear in a new parquet file named [login to view URL] with the same schema structure, column order, and record order
Note: the sample data is only example input; the logic should be dynamic, because the parquet file schema will not be the same every time. The code should read the parquet file's schema dynamically and create the new parquet file with the changed data (xxx); the rows, schema, and columns must remain the same.
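The per-value rewrite described above (walk every value, pass it through a function, rebuild the record unchanged apart from replacements) can be sketched in plain Python. This is a minimal sketch, not the poster's actual code: `transformer()` here is a stand-in implementing only the brown/null-to-black rule from the examples, and the walker assumes maps arrive as dicts and structs as tuples, which is how PySpark hands nested values to row-level Python code.

```python
# Hypothetical sketch: recursively apply transformer() to every leaf value
# in a nested record (dict = MapType, tuple/Row = StructType), keeping the
# shape and field order intact. The same walker could be wrapped around
# each row when reading the real parquet file with PySpark.

def transformer(value):
    # Placeholder rule taken from the post's examples:
    # 'brown' and null both become 'black'; everything else is unchanged.
    if value == 'brown' or value is None:
        return 'black'
    return value

def transform_value(value):
    if isinstance(value, dict):    # MapType: transform each map value
        return {k: transform_value(v) for k, v in value.items()}
    if isinstance(value, tuple):   # StructType: transform each field in order
        return tuple(transform_value(v) for v in value)
    return transformer(value)      # leaf value (string or None)

record = ('James', {'hair': 'black', 'eye': 'brown'}, ('James', '', 'Smith'))
print(transform_value(record))
# ('James', {'hair': 'black', 'eye': 'black'}, ('James', '', 'Smith'))
```

Because the walker only inspects the Python shape of each value and never hard-codes column names, it stays schema-agnostic, which matches the requirement that the code work for any parquet schema.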
Code Snippet for sample data
from pyspark.sql.types import StructType, StructField, StringType, MapType

dataDictionary = [
    ('James', {'hair': 'black', 'eye': 'brown'}, ("James", "", "Smith")),
    ('Michael', {'hair': 'brown', 'eye': None}, ("Michael", "Rose", "")),
    ('Robert', {'hair': 'red', 'eye': 'black'}, ("Robert", "", "Williams")),
    ('Washington', {'hair': 'grey', 'eye': 'grey'}, ("Maria", "Anne", "Jones"))
]

schema = StructType([
    StructField('old_name', StringType(), True),
    StructField('properties', MapType(StringType(), StringType()), True),
    StructField('name', StructType([
        StructField('firstname', StringType(), True),
        StructField('middlename', StringType(), True),
        StructField('lastname', StringType(), True)
    ]))
])
Sample data screen shot has the sample data
Sample schema screen shot has the schema details
Hello,
When viewing your job details, it really hooked me because I have so much experience in this area.
With solid experience in data analysis and Microsoft certifications in data management and analysis, SQL Server and business intelligence, Python programming, PySpark, Airflow, and AWS services, I could be valuable for your project.
Let's have a 10-minute call to discuss more details and get started right away.
Best Regards
Hosni Mrizek
$7 USD in 20 days
0.0 (0 reviews)
2 freelancers are bidding on average $8 USD/hour for this job
Hi,
I am an experienced Data Engineer with a solid background in Spark.
I have worked on many projects with Spark, Scala, Python, Cassandra, Snowflake, AWS,...
Let's have a call for more details about the project.
Regards