
Generate a new ConflictQA-popQA dataset with another language model #4

@acDante


Hi, thanks for the great work! I would like to generate a new version of the ConflictQA-popQA dataset using the Llama3-8B model. Could you explain how to use your code base to generate the knowledge-conflict data? What steps should I follow? The build_prompt functions for popQA in prompt_preparation.py do not seem to work with the original data/popQA.tsv dataset. Would you mind sharing the popQA data in JSON format?

I got the following error when running run.py:

Traceback (most recent call last):
  File "/mnt/ceph_rbd/LLM-Knowledge-Conflict/code/run.py", line 19, in <module>
    test_data = build_zeroshot_prompt_popQA(args.input_file, model_name=args.model_name)
  File "/mnt/ceph_rbd/LLM-Knowledge-Conflict/code/prompt_preparation.py", line 64, in build_zeroshot_prompt_popQA
    unit = json.loads(line)
  File "/mnt/ceph_rbd/miniconda3/envs/kc/lib/python3.10/json/__init__.py", line 346, in loads
    return _default_decoder.decode(s)
  File "/mnt/ceph_rbd/miniconda3/envs/kc/lib/python3.10/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/mnt/ceph_rbd/miniconda3/envs/kc/lib/python3.10/json/decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
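The traceback suggests that build_zeroshot_prompt_popQA calls json.loads on each line of the input file, i.e. it expects JSON lines, while data/popQA.tsv is tab-separated, so the first character of a line is not valid JSON. As a workaround, a minimal sketch that converts a TSV file (assuming it has a header row, which determines the JSON keys) into JSON lines:

```python
import csv
import json

def tsv_to_jsonl(tsv_path, jsonl_path):
    """Convert a tab-separated file with a header row into JSON lines,
    one object per record, so that json.loads(line) succeeds per line."""
    with open(tsv_path, newline="", encoding="utf-8") as fin, \
         open(jsonl_path, "w", encoding="utf-8") as fout:
        reader = csv.DictReader(fin, delimiter="\t")
        for row in reader:
            fout.write(json.dumps(row) + "\n")

# Hypothetical usage; the actual popQA column names come from the TSV header:
# tsv_to_jsonl("data/popQA.tsv", "data/popQA.jsonl")
```

Whether the resulting keys match what the build_prompt functions expect still depends on the field names the authors used, so sharing the original JSON-format popQA data would be the safer option.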
