The real advantage of CSV, in my mind, is that if the CSV is valid and normal then it's going to be a rectangular dataset (ignoring semantics within the dataset).
If I import JSON data I have no idea what shape the result will be in, and it requires a separate standard to let me know about columns and rows and validation can get complicated.
CSV is that way too. Nothing in the format requires every row to have the same number of fields, and nothing tells you what the columns actually represent (there's no schema). You can use the first line to name each field, and RFC 4180 does allow an optional header record, but it's not required and it says nothing about types.

Without a schema it's easy to lose the metadata behind the values the CSV file is trying to represent. Is this column just a bunch of numbers, or a date format? (for example). CSV is fine for moving data between systems that already agree on the format out of band, but anything else and you run into a pile of edge cases. Even importing a CSV file into a spreadsheet usually works, but context is often lost.
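To make that concrete, here's a small sketch (the sample data is made up) using Python's csv module: the parser happily accepts a short row, and every value comes back as a plain string, so whether "01/02/03" is a date (and in which order) is anyone's guess.

```python
import csv
import io

# Hypothetical CSV with a ragged second data row and an ambiguous date field.
data = "name,joined,score\nalice,01/02/03,42\nbob,2003-02-01\n"

rows = list(csv.reader(io.StringIO(data)))
print(rows)
# The parser raises no error: nothing enforces a rectangular shape.
# rows[1] has 3 fields, rows[2] has only 2, and every value is a str --
# the reader can't tell a number from a date from free text.
```

Rectangularity and typing have to be enforced by the consuming code (or an external schema like CSVW), not by the format itself.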