Issue
I use Apache Spark as an ETL tool to fetch tables from Oracle into Elasticsearch.
I face an issue with numeric columns: Spark reads them as decimal, but Elasticsearch
does not accept the decimal type, so I convert each decimal column to double,
which Elasticsearch does accept.
from pyspark.sql.functions import col

# cast every decimal column to double so Elasticsearch can ingest it
dataFrame = dataFrame.select(
    [col(name) if 'decimal' not in colType else col(name).cast('double') for name, colType in dataFrame.dtypes]
)
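For reference, dataFrame.dtypes returns plain (name, type) string pairs, which is what the 'decimal' substring check relies on. A minimal sketch of what that list can look like for an Oracle-sourced table (the column names here are hypothetical):

# inspect what Spark inferred for each column
for name, colType in dataFrame.dtypes:
    print(name, colType)
# illustrative output for an Oracle table:
# ID decimal(10,0)
# PRICE decimal(15,2)
# NAME string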
The problem with this approach is that every numeric column ends up as double, whether or not it actually holds fractional values.
My question: is there a way to detect whether a column should be converted to an integer type or to double?
Solution
The solution was to check the number of decimal places (the scale) before deciding on the appropriate type.
I added a function that inspects the type string and returns the target data type:
import re

def check(self, colType):
    # colType looks like 'decimal(15,0)'; extract the precision and scale
    [precision, scale] = re.findall(r'\d+', colType)
    # if the scale is 0 there are no decimal places, so cast to int
    return 'int' if scale == '0' else 'double'
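As a quick sanity check, here is a minimal standalone version of the same logic (no class, so self is dropped), with the values it returns for a couple of illustrative type strings:

import re

def check(colType):
    # 'decimal(15,0)' -> ['15', '0']; the second number is the scale
    precision, scale = re.findall(r'\d+', colType)
    return 'int' if scale == '0' else 'double'

print(check('decimal(10,0)'))   # int
print(check('decimal(15,2)'))   # double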
Then I call it for each column:
dataFrame = dataFrame.select(
[col(name) if 'decimal' not in colType else col(name).cast(self.check(colType)) for name, colType in dataFrame.dtypes]
)
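For completeness, here is a minimal, self-contained sketch of the whole conversion on a locally built DataFrame. The Oracle and Elasticsearch sides are omitted; the SparkSession setup, column names, and the standalone check function are illustrative assumptions, not part of the original pipeline:

import re
from decimal import Decimal
from pyspark.sql import SparkSession
from pyspark.sql.functions import col
from pyspark.sql.types import StructType, StructField, DecimalType

spark = SparkSession.builder.master('local[1]').appName('decimal-cast-demo').getOrCreate()

# hypothetical table: an integer-like decimal(10,0) id and a fractional decimal(15,2) price
schema = StructType([
    StructField('ID', DecimalType(10, 0)),
    StructField('PRICE', DecimalType(15, 2)),
])
dataFrame = spark.createDataFrame([(Decimal('1'), Decimal('9.99'))], schema)

def check(colType):
    # same scale-based logic as above, written as a plain function
    precision, scale = re.findall(r'\d+', colType)
    return 'int' if scale == '0' else 'double'

dataFrame = dataFrame.select(
    [col(name) if 'decimal' not in colType else col(name).cast(check(colType))
     for name, colType in dataFrame.dtypes]
)
print(dataFrame.dtypes)  # [('ID', 'int'), ('PRICE', 'double')]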