Case sensitivity issue when Spark reads Hive table columns
- 2025-08-16 23:00:02

Background
A Spark job reads a Hive table. The query references the columns in lowercase, but the Hive table's column names are uppercase, so the job cannot read any data for those columns.
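Below is a minimal sketch (in Scala) of how this kind of mismatch can arise, assuming a Parquet-backed external Hive table whose data files carry upper-case column names while the metastore schema is lower case. The database, table, column names and path are hypothetical.

```scala
import org.apache.spark.sql.SparkSession

object CaseMismatchRepro {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("case-mismatch-repro")
      .enableHiveSupport()
      .getOrCreate()
    import spark.implicits._

    // The Parquet files on disk carry the column names in UPPER CASE.
    Seq((1L, "click"), (2L, "view")).toDF("USER_ID", "EVENT")
      .write.mode("overwrite").parquet("/tmp/user_events")

    // The Hive metastore schema declares the same columns in lower case.
    spark.sql("CREATE DATABASE IF NOT EXISTS demo_db")
    spark.sql(
      """CREATE EXTERNAL TABLE IF NOT EXISTS demo_db.user_events (
        |  user_id BIGINT,
        |  event   STRING
        |) STORED AS PARQUET
        |LOCATION '/tmp/user_events'""".stripMargin)

    // Per the migration guide quoted below: Spark 2.3 and earlier returns null
    // for columns whose case differs between the metastore and Parquet schemas;
    // from 2.4 they resolve only when spark.sql.caseSensitive is false.
    spark.sql("SELECT user_id, event FROM demo_db.user_events").show()

    spark.stop()
  }
}
```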
Problem: how do we fix this? The Spark SQL migration guide describes the behavior:

In version 2.3 and earlier, when reading from a Parquet data source table, Spark always returns null for any column whose column names in Hive metastore schema and Parquet schema are in different letter cases, no matter whether spark.sql.caseSensitive is set to true or false. Since 2.4, when spark.sql.caseSensitive is set to false, Spark does case insensitive column name resolution between Hive metastore schema and Parquet schema, so even column names are in different letter cases, Spark returns corresponding column values. An exception is thrown if there is ambiguity, i.e. more than one Parquet column is matched. This change also applies to Parquet Hive tables when spark.sql.hive.convertMetastoreParquet is set to true.

# Add this parameter in the program or in the SQL
set spark.sql.caseSensitive=false

Reference:
Migration Guide: SQL, Datasets and DataFrame - Spark 3.2.0 Documentation
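A short sketch of applying that setting, either when building the SparkSession or from SQL inside a running job; the table name reuses the hypothetical demo_db.user_events from the sketch above.

```scala
import org.apache.spark.sql.SparkSession

object CaseInsensitiveRead {
  def main(args: Array[String]): Unit = {
    // Option 1: set the flag when building the session.
    val spark = SparkSession.builder()
      .appName("read-hive-table")
      .enableHiveSupport()
      .config("spark.sql.caseSensitive", "false")
      .getOrCreate()

    // Option 2: set it from SQL (e.g. in a spark-sql script or notebook).
    spark.sql("set spark.sql.caseSensitive=false")

    // With the flag off (the default value), Spark 2.4+ matches the lower-case
    // metastore column names against the upper-case Parquet column names.
    spark.sql("SELECT user_id, event FROM demo_db.user_events").show()

    spark.stop()
  }
}
```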