If the library preparation is the same and you have sequenced the same library twice at different depths, you can combine the data from both runs and report the combined, higher depth.
Sequencing depth is not defined by the data size or number of reads obtained by merging different datasets; strictly speaking, it refers to the data generated in an individual run.
There are a few measures of depth you can report. First of all, I wouldn't report a sequencing depth but rather a coverage depth. If you really want sequencing depth, sum the lengths of all generated NGS reads and divide by the length of the sequenced genome.
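As a rough illustration, here is a minimal Python sketch of that calculation, summing read lengths from a gzipped FASTQ file. The file name and genome length are placeholders, and for a paired-end run you would need to count both mate files:

```python
import gzip

GENOME_LENGTH = 3_100_000_000  # placeholder; adjust for your organism

def total_bases(fastq_gz_path):
    """Sum the lengths of all reads in a gzipped FASTQ file."""
    bases = 0
    with gzip.open(fastq_gz_path, "rt") as fh:
        for i, line in enumerate(fh):
            if i % 4 == 1:  # sequence lines are every 4th line, offset 1
                bases += len(line.strip())
    return bases

depth = total_bases("sample_R1.fastq.gz") / GENOME_LENGTH
print(f"Raw sequencing depth: {depth:.1f}x")
```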
A better approach is to report an average coverage value: sum all the mapped bases from the NGS reads and divide by the length of the genome.
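Assuming a coordinate-sorted, indexed BAM and the pysam library, a sketch of that average-coverage calculation might look like this (the file name is a placeholder, and the read filters are one reasonable choice, not the only one):

```python
import pysam

with pysam.AlignmentFile("sample.bam", "rb") as bam:
    genome_length = sum(bam.lengths)  # total length of all reference contigs
    # fetch() iterates mapped reads; skip secondary/supplementary alignments
    mapped_bases = sum(
        read.query_alignment_length
        for read in bam.fetch()
        if not read.is_secondary and not read.is_supplementary
    )

print(f"Average coverage: {mapped_bases / genome_length:.1f}x")
```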
Even better would be to quantify coverage with a density plot showing the fraction of the genome covered at each depth (attached example).
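One way to approximate such a plot from a BAM, using pysam and matplotlib, is sketched below. Note that `pileup()` skips zero-depth positions (handled separately here) and caps per-position depth by default, so treat this as an approximation rather than a definitive recipe:

```python
import collections
import pysam
import matplotlib.pyplot as plt

depth_counts = collections.Counter()
covered = 0
with pysam.AlignmentFile("sample.bam", "rb") as bam:
    genome_length = sum(bam.lengths)
    for column in bam.pileup():
        depth_counts[column.nsegments] += 1  # reads covering this position
        covered += 1
# pileup() omits positions with zero coverage, so add them explicitly
depth_counts[0] = genome_length - covered

depths = sorted(depth_counts)
fractions = [depth_counts[d] / genome_length for d in depths]
plt.plot(depths, fractions)
plt.xlabel("Depth")
plt.ylabel("Fraction of genome")
plt.savefig("coverage_density.png")
```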
Provided the library prep method is the same, one could sequence a sample as many times as needed, merge the data either at the FASTQ level or at the BAM level (i.e. right after the alignment step), and then proceed with downstream analysis. In fact, this is the only way to reach higher depths for your target regions in an exome sequencing analysis.
Note: the cleanest way is to merge at the BAM level, where you can assign a distinct read group to the BAM from each sequencing run. That way, read-group covariates are accounted for during GATK's base recalibration (BQSR) step; see the sketch below.
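As a minimal sketch (not the poster's exact workflow), here is one way to tag each run's reads with a distinct read group and merge the BAMs using pysam, so that BQSR can model per-run covariates. The file names, read-group IDs, and sample name are hypothetical placeholders:

```python
import pysam

def add_read_group(in_bam, out_bam, rg_id, sample):
    """Write a copy of in_bam whose reads all carry read group rg_id."""
    with pysam.AlignmentFile(in_bam, "rb") as src:
        header = src.header.to_dict()
        # declare the read group in the header (platform is an assumption)
        header["RG"] = [{"ID": rg_id, "SM": sample, "PL": "ILLUMINA"}]
        with pysam.AlignmentFile(out_bam, "wb", header=header) as dst:
            for read in src:
                read.set_tag("RG", rg_id)
                dst.write(read)

add_read_group("run1.bam", "run1.rg.bam", "run1", "sampleA")
add_read_group("run2.bam", "run2.rg.bam", "run2", "sampleA")

# pysam.merge wraps `samtools merge`; -f overwrites an existing output file
pysam.merge("-f", "merged.bam", "run1.rg.bam", "run2.rg.bam")
pysam.index("merged.bam")  # index needed by most downstream tools
```

The same result can be achieved with Picard's AddOrReplaceReadGroups followed by `samtools merge`; the point is simply that each run keeps its own RG so the recalibration model can distinguish them.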