Hive support must be enabled to use this command. For Hive SerDe tables, Spark SQL respects the Hive-related configuration, including hive.exec.dynamic.partition and hive.exec.dynamic.partition.mode. The root cause is that Hive picks up the wrong input format in the file-merge stage. Note that, like most Hadoop tools, Hive input is directory-based; consequently, dropping an external table does not affect the underlying data.

2. sqlContext.sql("show columns in mytable") <-- good results

The Hive query for this is as follows: insert overwrite directory wasb:///
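The following is a minimal sketch of how these pieces fit together in Spark with Hive support enabled: the dynamic-partition settings mentioned above, the SHOW COLUMNS check, and an INSERT OVERWRITE DIRECTORY statement. The application name, table name (mytable), and the wasb:// output path are placeholders for illustration, not values from the original.

```scala
import org.apache.spark.sql.SparkSession

object HiveSerdeSketch {
  def main(args: Array[String]): Unit = {
    // Hive support must be enabled for Hive SerDe tables and Hive DDL/DML
    val spark = SparkSession.builder()
      .appName("hive-serde-sketch")        // placeholder app name
      .enableHiveSupport()
      .getOrCreate()

    // Hive-related configuration respected by Spark SQL for Hive SerDe tables
    spark.sql("SET hive.exec.dynamic.partition = true")
    spark.sql("SET hive.exec.dynamic.partition.mode = nonstrict")

    // Inspect the table's columns (the "good results" call quoted above)
    spark.sql("SHOW COLUMNS IN mytable").show()

    // Write query output to a directory; the wasb:// path below is a
    // placeholder for an actual Azure Blob Storage location
    spark.sql(
      """INSERT OVERWRITE DIRECTORY 'wasb:///example/output'
        |SELECT * FROM mytable""".stripMargin)

    spark.stop()
  }
}
```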