Main content
Creating the table
Importing data and querying
Other notes
Summary
Main content
We have an existing file in CSV format that needs to be imported into Hive. Suppose the CSV contains the following:
1001,zs,23
1002,lis,24
Creating the table
create table if not exists csv2(
uid int,
uname string,
age int
)
row format serde 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
stored as textfile;
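If the CSV uses a non-default delimiter, quoting, or escaping, OpenCSVSerde can be configured through serdeproperties. A sketch (the table name csv2_custom is hypothetical; the property values shown are the SerDe's defaults):

```sql
create table if not exists csv2_custom (
  uid int,
  uname string,
  age int
)
row format serde 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
with serdeproperties (
  "separatorChar" = ",",
  "quoteChar"     = "\"",
  "escapeChar"    = "\\"
)
stored as textfile;
```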
Importing data and querying
load data local inpath '/data/csv2.csv' into table csv2;
select * from csv2;
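One caveat worth knowing: OpenCSVSerde exposes every column as string at read time, regardless of the types declared in the DDL, so cast explicitly when numeric semantics matter. A sketch (age_plus_one is an illustrative alias):

```sql
-- Columns read through OpenCSVSerde come back as strings; cast before arithmetic
select uid,
       uname,
       cast(age as int) + 1 as age_plus_one
from csv2;
```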
Other notes
If the table is created with the Parquet storage format, can a CSV file be loaded into it directly with load data?
drop table csv2;
create table if not exists csv2(
uid int,
uname string,
age int
)
row format serde 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
stored as parquet;
load data local inpath '/data/csv2.csv' into table csv2;
select * from csv2;
Querying the table then fails with:
Failed with exception java.io.IOException:java.lang.RuntimeException: hdfs://192.168.10.101:8020/user/hive/warehouse/csv2/csv2.csv is not a Parquet file. expected magic number at tail [80, 65, 82, 49] but found [44, 50, 52, 10]
**No. The file must first be loaded into a textfile table, and then inserted from that staging table into a Parquet table.** The error makes the reason clear: load data only moves the file into the table's directory without converting it, so Hive finds the CSV's trailing bytes `,24\n` ([44, 50, 52, 10]) where the Parquet magic number `PAR1` ([80, 65, 82, 49]) should be. The fix:
drop table csv2;
create table if not exists csv2
(
uid int,
uname string,
age int
)
row format serde 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
stored as textfile;
-- First load the CSV file into table csv2, stored as textfile
load data local inpath '/data/csv2.csv' into table csv2;
drop table if exists csv3;
-- Create csv3, stored as parquet
create table if not exists csv3
(
uid int,
uname string,
age int
)
row format delimited
fields terminated by ','
stored as parquet;
-- Insert csv2's data into csv3
insert overwrite table csv3 select * from csv2;
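Once the textfile staging table exists, a create-table-as-select is an equivalent one-step way to produce the Parquet copy (csv4 is a hypothetical table name):

```sql
-- CTAS: Hive writes the result set directly in the new table's Parquet format
create table csv4 stored as parquet
as
select * from csv2;
```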
Summary
The key is to use org.apache.hadoop.hive.serde2.OpenCSVSerde. To store CSV data in a Hive Parquet table, the data must first be loaded into a textfile table and then inserted into the Parquet table from there.
That concludes this detailed example of importing a CSV file into Hive; for more material on the topic, see the other related articles on 易知道 (ezd.cc)!