Date: Thu, 3 Feb 2000 09:05:45 +0100
From: Jakub Jelinek
To: Aaron Tiensivu
Cc: Richard Henderson, Linux Kernel
Subject: Re: [2.3.42] Doesn't compile on Alpha
Message-Id: <20000203090545.V1909@mff.cuni.cz>
In-Reply-To: <20000202205029.A6785@ctechnix.com>; from Aaron Tiensivu on Wed, Feb 02, 2000 at 08:50:29PM -0500

On Wed, Feb 02, 2000 at 08:50:29PM -0500, Aaron Tiensivu wrote:
> [root@multiameal /usr/src/linux-2.3/linux]# make
> gcc -D__KERNEL__ -I/usr/src/linux-2.3/linux/include -O2 -fomit-frame-pointer
>   -fno-strict-aliasing -pipe -mno-fp-regs -ffixed-8 -mcpu=ev4 -c
>   -o init/main.o init/main.c
> In file included from init/main.c:33:
> /usr/src/linux-2.3/linux/include/linux/pci.h:318: parse error before `dma_addr_t'
> /usr/src/linux-2.3/linux/include/linux/pci.h:318: warning: no semicolon at end of struct or union
> /usr/src/linux-2.3/linux/include/linux/pci.h:346: parse error before `}'
> make: *** [init/main.o] Error 1
>
> Rather odd, since the same tree builds fine for i386.
> I'm guessing the newer PCI code hasn't been synced for Alpha yet?
> I can provide .config if need be.

Try this. It is Richard Henderson's work-in-progress patch for dynamic DMA,
which I've hacked up to match what Linus finally accepted into 2.3.41,
hopefully with all the NEW_PCI_DMA_MAP stuff protected by an ifdef of the
same name. If I haven't forgotten anything (I have neither an Alpha nor a
cross-compiler set up), it should work for you as long as you don't have
more than 4GB (or whatever the limit on Alpha is) of physical memory.
Please tell me if the patch does not compile.

Once this Alpha code is finished and NEW_PCI_DMA_MAP is enabled and
working, it will remove the physical memory limits on Alpha without ugly
hacks like bigmem, just as it did on UltraSPARC: as of 2.3.42 we no longer
impose any physical RAM limit there, so if you want 128GB of RAM on
sparc64, just prepare enough bucks for a big machine. All drivers you use
on such a system have to be converted to the new DMA mapping API, though;
in the vger CVS tree we have converted all drivers that are available as
config options for sparc64, and we will be working on merging them into
Linus' tree through the maintainers.
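To make that conversion concrete, here is a rough sketch of what a
driver-side change looks like under the mapping API this patch exports;
the device structure, fields, and function names below are hypothetical,
invented purely for the example:

	/* Hypothetical driver fragment: replacing virt_to_bus() with the
	   dynamic DMA mapping calls discussed above.  "struct mydev" and
	   its members are made up for the example.  */

	struct mydev {
		struct pci_dev *pdev;
		void *rx_buf;		/* CPU-visible buffer */
		dma_addr_t rx_dma;	/* bus address handed to the device */
		size_t rx_len;
	};

	static int mydev_start_rx(struct mydev *dev)
	{
		/* Old-style code would have programmed
		   virt_to_bus(dev->rx_buf) into the chip, which assumes
		   all RAM is reachable by a 32-bit PCI master.  With the
		   mapping API the bus address comes from an IOMMU window
		   (or the direct-map window) instead: */
		dev->rx_dma = pci_map_single(dev->pdev, dev->rx_buf,
					     dev->rx_len);
		if (dev->rx_dma == 0)
			return -1;	/* no iommu ptes available */

		/* ... write dev->rx_dma into the device's DMA engine ... */
		return 0;
	}

	static void mydev_rx_done(struct mydev *dev)
	{
		/* The device owns the buffer between map and unmap; only
		   after the unmap may the CPU safely read what the device
		   wrote there.  */
		pci_unmap_single(dev->pdev, dev->rx_dma, dev->rx_len);
	}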
	Cheers,
	    Jakub
___________________________________________________________________
Jakub Jelinek | jakub@redhat.com | http://sunsite.mff.cuni.cz/~jj
Linux version 2.3.41 on a sparc64 machine (1343.49 BogoMips)
___________________________________________________________________

Content-Disposition: attachment; filename="dynamic-dma-alpha.patch"

--- linux/arch/alpha/kernel/Makefile.jj	Mon Dec 20 09:09:50 1999
+++ linux/arch/alpha/kernel/Makefile	Thu Feb  3 08:12:24 2000
@@ -30,7 +30,7 @@ O_OBJS += core_apecs.o core_cia.o core_
 else
 ifdef CONFIG_PCI
-O_OBJS	+= pci.o
+O_OBJS	+= pci.o pci_iommu.o
 endif
 
 # Core logic support
--- linux/arch/alpha/kernel/core_cia.c.jj	Mon Dec 20 09:09:54 1999
+++ linux/arch/alpha/kernel/core_cia.c	Thu Feb  3 08:12:24 2000
@@ -314,12 +314,21 @@ struct pci_ops cia_pci_ops =
 	write_dword:	cia_write_config_dword
 };
 
+void
+cia_pci_tbi(struct pci_controler *hose, dma_addr_t start, dma_addr_t end)
+{
+	wmb();
+	*(vip)CIA_IOC_PCI_TBIA = 3;	/* Flush all locked and unlocked.  */
+	mb();
+	*(vip)CIA_IOC_PCI_TBIA;		/* Re-read to force write.  */
+}
+
 void __init
 cia_init_arch(void)
 {
 	struct pci_controler *hose;
 	struct resource *hae_mem;
-	unsigned int temp; 
+	unsigned int temp;
 
 #if DEBUG_DUMP_REGS
 	temp = *(vuip)CIA_IOC_CIA_REV; mb();
@@ -368,7 +377,7 @@ cia_init_arch(void)
 	printk("cia_init: W3_BASE was 0x%x\n", temp);
 #endif /* DEBUG_DUMP_REGS */
 
-	/* 
+	/*
 	 * Set up error reporting.
 	 */
 	temp = *(vuip)CIA_IOC_CIA_ERR;
@@ -382,6 +391,55 @@ cia_init_arch(void)
 	mb();
 
 	/*
+	 * Create our single hose.
+	 */
+
+	pci_isa_hose = hose = alloc_pci_controler();
+	hae_mem = alloc_resource();
+
+	hose->io_space = &ioport_resource;
+	hose->mem_space = hae_mem;
+	hose->config_space = CIA_CONF;
+	hose->index = 0;
+
+	hae_mem->start = 0;
+	hae_mem->end = CIA_MEM_R1_MASK;
+	hae_mem->name = pci_hae0_name;
+	hae_mem->flags = IORESOURCE_MEM;
+
+	if (request_resource(&iomem_resource, hae_mem) < 0)
+		printk(KERN_ERR "Failed to request HAE_MEM\n");
+
+#ifdef NEW_PCI_DMA_MAP
+	/*
+	 * Set up the PCI to main memory translation windows.
+	 *
+	 * Window 0 is scatter-gather 8MB at 8MB (for isa)
+	 * Window 1 is direct access 1GB at 1GB
+	 * Window 2 is scatter-gather 128MB at 2GB
+	 * ??? We ought to scale this last with memory.
+	 */
+	hose->sg_isa = iommu_arena_new(8*1024*1024, 8*1024*1024);
+	hose->sg_pci = iommu_arena_new(2048u*1024*1024, 128*1024*1024);
+#if 1
+	hose->direct_map_base = 1024*1024*1024;
+	hose->direct_map_size = 1024*1024*1024;
+#endif
+
+	*(vuip)CIA_IOC_PCI_W0_BASE = hose->sg_isa->dma_base | 3;
+	*(vuip)CIA_IOC_PCI_W0_MASK = (hose->sg_isa->size - 1) & 0xfff00000;
+	*(vuip)CIA_IOC_PCI_T0_BASE = virt_to_phys(hose->sg_isa->ptes) >> 2;
+	*(vuip)CIA_IOC_PCI_W1_BASE = hose->direct_map_base | 1;
+	*(vuip)CIA_IOC_PCI_W1_MASK = (hose->direct_map_size - 1) & 0xfff00000;
+	*(vuip)CIA_IOC_PCI_T1_BASE = 0;
+
+	*(vuip)CIA_IOC_PCI_W2_BASE = hose->sg_pci->dma_base | 3;
+	*(vuip)CIA_IOC_PCI_W2_MASK = (hose->sg_pci->size - 1) & 0xfff00000;
+	*(vuip)CIA_IOC_PCI_T2_BASE = virt_to_phys(hose->sg_pci->ptes) >> 2;
+
+	*(vuip)CIA_IOC_PCI_W3_BASE = 0;
+#else
+	/*
 	 * Set up the PCI->physical memory translation windows.
 	 * For now, windows 2 and 3 are disabled.  In the future,
 	 * we may want to use them to do scatter/gather DMA.
@@ -402,14 +460,15 @@ cia_init_arch(void)
 
 	*(vuip)CIA_IOC_PCI_W2_BASE = 0x0;
 	*(vuip)CIA_IOC_PCI_W3_BASE = 0x0;
+#endif
 	mb();
 
-	/* 
-	 * Next, clear the CIA_CFG register, which gets used
-	 * for PCI Config Space accesses.  That is the way
-	 * we want to use it, and we do not want to depend on
-	 * what ARC or SRM might have left behind...
-	 */
+	/*
+	 * Next, clear the CIA_CFG register, which gets used
+	 * for PCI Config Space accesses.  That is the way
+	 * we want to use it, and we do not want to depend on
+	 * what ARC or SRM might have left behind...
+	 */
 	*((vuip)CIA_IOC_CFG) = 0; mb();
 
 	/*
@@ -419,26 +478,6 @@ cia_init_arch(void)
 	*((vuip)CIA_IOC_HAE_MEM); /* read it back. */
 	*((vuip)CIA_IOC_HAE_IO) = 0; mb();
 	*((vuip)CIA_IOC_HAE_IO);  /* read it back. */
-
-	/*
-	 * Create our single hose.
-	 */
-
-	hose = alloc_pci_controler();
-	hae_mem = alloc_resource();
-
-	hose->io_space = &ioport_resource;
-	hose->mem_space = hae_mem;
-	hose->config_space = CIA_CONF;
-	hose->index = 0;
-
-	hae_mem->start = 0;
-	hae_mem->end = CIA_MEM_R1_MASK;
-	hae_mem->name = pci_hae0_name;
-	hae_mem->flags = IORESOURCE_MEM;
-
-	if (request_resource(&iomem_resource, hae_mem) < 0)
-		printk(KERN_ERR "Failed to request HAE_MEM\n");
 }
 
 static inline void
@@ -456,6 +495,8 @@ void
 cia_machine_check(unsigned long vector, unsigned long la_ptr,
 		  struct pt_regs * regs)
 {
+	int expected;
+
 	/* Clear the error before any reporting.  */
 	mb();
 	mb();  /* magic */
@@ -464,5 +505,22 @@ cia_machine_check(unsigned long vector,
 	wrmces(rdmces());	/* reset machine check pending flag.  */
 	mb();
 
-	process_mcheck_info(vector, la_ptr, regs, "CIA", mcheck_expected(0));
+	expected = mcheck_expected(0);
+	process_mcheck_info(vector, la_ptr, regs, "CIA", expected);
+
+	if (!expected && vector == 0x660) {
+		struct el_common *com;
+		struct el_common_EV5_uncorrectable_mcheck *ev5;
+		struct el_CIA_sysdata_mcheck *cia;
+
+		com = (void *)la_ptr;
+		ev5 = (void *)(la_ptr + com->proc_offset);
+		cia = (void *)(la_ptr + com->sys_offset);
+
+		if (com->code == 0x202) {
+			printk(KERN_CRIT "CIA pci err0=%016lx "
+			       "err1=%016lx err2=%016lx\n",
+			       cia->pci_err0, cia->pci_err1, cia->pci_err2);
+		}
+	}
 }
--- linux/arch/alpha/kernel/irq.c.jj	Mon Dec 20 09:10:08 1999
+++ linux/arch/alpha/kernel/irq.c	Thu Feb  3 08:12:24 2000
@@ -897,6 +897,7 @@ process_mcheck_info(unsigned long vector
 	case 0x98: reason = "processor detected hard error"; break;
 
 	/* System specific (these are for Alcor, at least): */
+	case 0x202: reason = "system detected hard error"; break;
 	case 0x203: reason = "system detected uncorrectable ECC error"; break;
 	case 0x204: reason = "SIO SERR occurred on PCI bus"; break;
 	case 0x205: reason = "parity error detected by CIA"; break;
--- linux/arch/alpha/kernel/machvec_impl.h.jj	Mon Dec 20 09:10:10 1999
+++ linux/arch/alpha/kernel/machvec_impl.h	Thu Feb  3 08:24:55 2000
@@ -100,9 +100,16 @@
 #define DO_T2_IO	IO(T2,t2)
 #define DO_TSUNAMI_IO	IO(TSUNAMI,tsunami)
 
+#ifdef NEW_PCI_DMA_MAP
+#define BUS(which)					\
+	mv_virt_to_bus:	CAT(which,_virt_to_bus),	\
+	mv_bus_to_virt:	CAT(which,_bus_to_virt),	\
+	mv_pci_tbi:	CAT(which,_pci_tbi)
+#else
 #define BUS(which)					\
 	mv_virt_to_bus:	CAT(which,_virt_to_bus),	\
 	mv_bus_to_virt:	CAT(which,_bus_to_virt)
+#endif
 
 #define DO_APECS_BUS	BUS(apecs)
 #define DO_CIA_BUS	BUS(cia)
--- linux/arch/alpha/kernel/pci.c.jj	Fri Jan 14 08:29:40 2000
+++ linux/arch/alpha/kernel/pci.c	Thu Feb  3 08:12:24 2000
@@ -40,6 +40,7 @@ const char pci_hae0_name[] = "HAE0";
  */
 
 struct pci_controler *hose_head, **hose_tail = &hose_head;
+struct pci_controler *pci_isa_hose;
 
 /*
  * Quirks.
@@ -327,7 +328,7 @@ pcibios_fixup_pbus_ranges(struct pci_bus
 int __init
 pcibios_enable_device(struct pci_dev *dev)
 {
-	/* Not needed, since we enable all devices at startup.  */
+	/* What in the world is this supposed to do?  */
 	return 0;
 }
--- linux/arch/alpha/kernel/pci_impl.h.jj	Thu Dec 16 07:37:57 1999
+++ linux/arch/alpha/kernel/pci_impl.h	Thu Feb  3 08:12:24 2000
@@ -7,7 +7,7 @@
 
 struct pci_dev;
 struct pci_controler;
-
+struct iommu_arena;
 
 /*
  * We can't just blindly use 64K for machines with EISA busses; they
@@ -125,11 +125,14 @@ static inline u8 bridge_swizzle(u8 pin,
 
 /* The hose list.  */
 extern struct pci_controler *hose_head, **hose_tail;
+extern struct pci_controler *pci_isa_hose;
 
 extern void common_init_pci(void);
 extern u8 common_swizzle(struct pci_dev *, u8 *);
 extern struct pci_controler *alloc_pci_controler(void);
 extern struct resource *alloc_resource(void);
+
+extern struct iommu_arena *iommu_arena_new(dma_addr_t, unsigned long);
 
 extern const char *const pci_io_names[];
 extern const char *const pci_mem_names[];
--- linux/arch/alpha/kernel/pci_iommu.c.jj	Thu Feb  3 08:12:24 2000
+++ linux/arch/alpha/kernel/pci_iommu.c	Thu Feb  3 08:38:44 2000
@@ -0,0 +1,479 @@
+/*
+ *	linux/arch/alpha/kernel/pci_iommu.c
+ */
+
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/pci.h>
+#include <linux/slab.h>
+#include <linux/bootmem.h>
+
+#include <asm/io.h>
+#include <asm/hwrpb.h>
+
+#include "pci_impl.h"
+
+
+/* #define DEBUG_ALLOC 1 */
+
+#if DEBUG_ALLOC
+# define DBGA(args)		printk args
+#else
+# define DBGA(args)
+#endif
+
+
+static inline unsigned long
+mk_iommu_pte(unsigned long paddr)
+{
+	return (paddr >> (PAGE_SHIFT-1)) | 1;
+}
+
+static inline long
+calc_npages(long bytes)
+{
+	return (bytes + PAGE_SIZE - 1) >> PAGE_SHIFT;
+}
+
+static inline unsigned long
+calc_order(unsigned long npages)
+{
+	unsigned long order;
+#if defined(__alpha_cix__) && defined(__alpha_fix__)
+	asm("ctlz %1,%0" : "=r"(order) : "r"(npages));
+	order += ((npages & -npages) != npages);
+#else
+	for (order = 0; 1UL << order < npages; ++order)
+		continue;
+#endif
+	return order;
+}
+
+struct iommu_arena *
+iommu_arena_new(dma_addr_t base, unsigned long window_size)
+{
+	unsigned long entries, mem_size, mem_pages;
+	struct iommu_arena *arena;
+
+	entries = window_size >> PAGE_SHIFT;
+	mem_size = entries * sizeof(unsigned long);
+	mem_pages = calc_npages(mem_size);
+
+	arena = alloc_bootmem(sizeof(*arena));
+	arena->ptes = alloc_bootmem_pages(mem_pages * PAGE_SIZE);
+
+	spin_lock_init(&arena->lock);
+	arena->dma_base = base;
+	arena->size = window_size;
+	arena->alloc_hint = 0;
+
+	return arena;
+}
+
+static long
+iommu_arena_alloc(struct iommu_arena *arena, long n)
+{
+	unsigned long flags;
+	unsigned long *beg, *p, *end;
+	long i;
+
+	spin_lock_irqsave(&arena->lock, flags);
+
+	/* Search forward for the first sequence of N empty ptes.  */
+	beg = arena->ptes;
+	end = beg + (arena->size >> PAGE_SHIFT);
+	p = beg + arena->alloc_hint;
+	i = 0;
+	while (i < n && p < end)
+		i = (*p++ == 0 ? i + 1 : 0);
+
+	if (p >= end) {
+		/* Failure.  Assume the hint was wrong and go back to
+		   search from the beginning.  */
+		p = beg;
+		i = 0;
+		while (i < n && p < end)
+			i = (*p++ == 0 ? i + 1 : 0);
+
+		if (p >= end) {
+			spin_unlock_irqrestore(&arena->lock, flags);
+			return -1;
+		}
+	}
+
+	/* Success.  Mark them all in use, ie not zero.  Typically
+	   bit zero is the valid bit, so write ~1 into everything.
+	   The chip specific bits will fill this in with something
+	   kosher when we return.  */
+	for (p = p - n, i = 0; i < n; ++i)
+		p[i] = ~1UL;
+
+	arena->alloc_hint = p - beg + n;
+	spin_unlock_irqrestore(&arena->lock, flags);
+
+	return p - beg;
+}
+
+static void
+iommu_arena_free(struct iommu_arena *arena, long ofs, long n)
+{
+	unsigned long *p;
+	long i;
+
+	p = arena->ptes + ofs;
+	for (i = 0; i < n; ++i)
+		p[i] = 0;
+	arena->alloc_hint = ofs;
+}
+
+/* Map a single buffer of the indicated size for PCI DMA in streaming
+   mode.  The 32-bit PCI bus mastering address to use is returned.
+   Once the device is given the dma address, the device owns this memory
+   until either pci_unmap_single or pci_sync_single is performed.  */
+
+dma_addr_t
+pci_map_single(struct pci_dev *pdev, void *cpu_addr, size_t size)
+{
+#ifdef NEW_PCI_DMA_MAP
+	struct pci_controler *hose = pdev ? pdev->sysdata : pci_isa_hose;
+	dma_addr_t max_dma = pdev ? pdev->dma_mask : 0x00ffffff;
+	long npages, order;
+	unsigned long paddr;
+	dma_addr_t ret;
+
+	paddr = virt_to_phys(cpu_addr);
+	npages = calc_npages((paddr & ~PAGE_MASK) + size);
+	order = calc_order(npages);
+
+	/* First check to see if we can use the direct map window.  */
+	if (paddr + size + hose->direct_map_base - 1 <= max_dma
+	    && paddr + size <= hose->direct_map_size) {
+		ret = paddr + hose->direct_map_base;
+
+		DBGA(("dma_map_single: [%p,%lx] -> direct %x from %p\n",
+		      cpu_addr, size, ret, __builtin_return_address(0)));
+	} else {
+		struct iommu_arena *arena;
+		long dma_ofs, i;
+
+		/* If the machine doesn't define a pci_tbi routine,
+		   we have to assume it doesn't support sg mapping.  */
+		if (! alpha_mv.mv_pci_tbi)
+			return 0;
+
+		arena = hose->sg_pci;
+		if (!arena || arena->dma_base + arena->size > max_dma)
+			arena = hose->sg_isa;
+
+		dma_ofs = iommu_arena_alloc(arena, npages);
+		if (dma_ofs < 0)
+			return 0;
+
+		paddr &= PAGE_MASK;
+		for (i = 0; i < npages; ++i, paddr += PAGE_SIZE) {
+			arena->ptes[i + dma_ofs] = mk_iommu_pte(paddr);
+
+			DBGA(("DMS: %x -> %lx\n",
+			      arena->dma_base + ((dma_ofs + i) << PAGE_SHIFT),
+			      arena->ptes[i + dma_ofs]));
+		}
+
+		ret = arena->dma_base + dma_ofs * PAGE_SIZE;
+		ret += (unsigned long)cpu_addr & ~PAGE_MASK;
+
+		/* ??? This shouldn't have been needed, since the entries
+		   we've just modified were not in the iommu tlb.  */
+		alpha_mv.mv_pci_tbi(hose, ret, ret + size - 1);
+
+		DBGA(("dma_map_single: [%p,%lx] np %ld -> sg %x from %p\n",
+		      cpu_addr, size, npages, ret,
+		      __builtin_return_address(0)));
+	}
+
+	return ret;
+#else
+	return virt_to_bus(cpu_addr);
+#endif
+}
+
+/* Unmap a single streaming mode DMA translation.  The DMA_ADDR and
+   SIZE must match what was provided for in a previous pci_map_single
+   call.  All other usages are undefined.  After this call, reads by
+   the cpu to the buffer are guaranteed to see whatever the device
+   wrote there.  */
+
+void
+pci_unmap_single(struct pci_dev *pdev, dma_addr_t dma_addr, size_t size)
+{
+#ifdef NEW_PCI_DMA_MAP
+	struct pci_controler *hose = pdev ? pdev->sysdata : pci_isa_hose;
+	long npages, order;
+
+	npages = calc_npages((dma_addr & ~PAGE_MASK) + size);
+	order = calc_order(npages);
+
+	if (dma_addr >= hose->direct_map_base
+	    && (dma_addr + size
+		<= hose->direct_map_base + hose->direct_map_size)) {
+		/* Nothing to do.  */
+		DBGA(("dma_unmap_single: direct [%x,%lx] from %p\n",
+		      dma_addr, size, __builtin_return_address(0)));
+	} else {
+		struct iommu_arena *arena;
+		long dma_ofs;
+
+		arena = hose->sg_pci;
+		if (!arena || dma_addr < arena->dma_base)
+			arena = hose->sg_isa;
+
+		dma_ofs = (dma_addr - arena->dma_base) >> PAGE_SHIFT;
+		if (dma_ofs * PAGE_SIZE >= arena->size) {
+			printk(KERN_ERR "Bogus dma_unmap_single: dma_addr %x "
+			       " base %x size %x\n", dma_addr, arena->dma_base,
+			       arena->size);
+			return;
+			BUG();
+		}
+
+		iommu_arena_free(arena, dma_ofs, npages);
+		alpha_mv.mv_pci_tbi(hose, dma_addr, dma_addr + size - 1);
+
+		DBGA(("dma_unmap_single: sg [%x,%lx] np %ld from %p\n",
+		      dma_addr, size, npages, __builtin_return_address(0)));
+	}
+#endif
+}
+
+/* Allocate and map kernel buffer using consistent mode DMA for PCI
+   device.  Returns non-NULL cpu-view pointer to the buffer if
+   successful and sets *DMA_ADDRP to the pci side dma address as well,
+   else DMA_ADDRP is undefined.  */
+
+void *
+pci_alloc_consistent(struct pci_dev *pdev, size_t size, dma_addr_t *dma_addrp)
+{
+	void *cpu_addr;
+
+	cpu_addr = kmalloc(size, GFP_ATOMIC);
+	if (! cpu_addr) {
+		printk("pci_alloc_consistent: kmalloc failed from %p\n",
+		       __builtin_return_address(0));
+		/* ??? Really atomic allocation?  Otherwise we could play
+		   with vmalloc and sg if we can't find contiguous memory.  */
+		return NULL;
+	}
+	memset(cpu_addr, 0, size);
+
+	*dma_addrp = pci_map_single(pdev, cpu_addr, size);
+	if (*dma_addrp == 0) {
+		kfree_s(cpu_addr, size);
+		return NULL;
+	}
+
+	DBGA(("pci_alloc_consistent: %lx -> [%p,%x] from %p\n",
+	      size, cpu_addr, *dma_addrp, __builtin_return_address(0)));
+
+	return cpu_addr;
+}
+
+/* Free and unmap a consistent DMA buffer.  CPU_ADDR and DMA_ADDR must
+   be values that were returned from pci_alloc_consistent.  SIZE must
+   be the same as what was passed into pci_alloc_consistent.
+   References to the memory and mappings associated with CPU_ADDR or
+   DMA_ADDR past this call are illegal.  */
+
+void
+pci_free_consistent(struct pci_dev *pdev, size_t size, void *cpu_addr,
+		    dma_addr_t dma_addr)
+{
+	pci_unmap_single(pdev, dma_addr, size);
+	kfree_s(cpu_addr, size);
+
+	DBGA(("pci_free_consistent: [%x,%lx] from %p\n",
+	      dma_addr, size, __builtin_return_address(0)));
+}
+
+/* Map a set of buffers described by scatterlist in streaming mode for
+   PCI DMA.  This is the scatter-gather version of the above
+   pci_map_single interface.  Here the scatter gather list elements
+   are each tagged with the appropriate PCI dma address and length.
+   They are obtained via sg_dma_{address,length}(SG).  */
+
+static inline unsigned long
+sg_prepare(struct scatterlist *sg_orig, int nents, int *pdma_nents)
+{
+	struct scatterlist *sg, *dma_sg;
+	long next_vaddr;
+	dma_addr_t dent_addr, dent_len;
+
+	sg = dma_sg = sg_orig;
+
+	next_vaddr = (unsigned long)sg->address + sg->length;
+	dent_addr = (unsigned long)sg->address & ~PAGE_MASK;
+	dent_len = sg->length;
+
+	for (sg++; --nents >= 0; ++sg) {
+		unsigned long addr;
+		addr = (unsigned long) sg->address;
+
+		/* For the purposes of DMA, we can combine if the
+		   two addresses are virtually contiguous, or if they
+		   both are on a page boundary.  */
+		if (! (next_vaddr == addr
+		       || ((next_vaddr | addr) & ~PAGE_MASK) == 0)) {
+			/* Can't combine.  */
+			dma_sg->dma_address = dent_addr;
+			dma_sg->dma_length = dent_len;
+			dma_sg++;
+
+			/* Round up the displacement to the next page.  */
+			dent_addr = (dent_addr + dent_len + PAGE_SIZE - 1);
+			dent_addr &= PAGE_MASK;
+			dent_len = 0;
+		}
+		dent_len += sg->length;
+		next_vaddr = addr + sg->length;
+	}
+	dma_sg->dma_address = dent_addr;
+	dma_sg->dma_length = dent_len;
+	dma_sg++;
+
+	*pdma_nents = dma_sg - sg_orig;
+
+	return (dent_addr + dent_len + PAGE_SIZE - 1) >> PAGE_SHIFT;
+}
+
+static inline unsigned long *
+sg_fill_span(unsigned long *pte, unsigned long start, unsigned long end)
+{
+	while (start < end) {
+		*pte++ = mk_iommu_pte(start);
+		start += PAGE_SIZE;
+	}
+
+	return pte;
+}
+
+static inline unsigned long *
+sg_fill(unsigned long *pte, struct scatterlist *sg, int nents)
+{
+	unsigned long paddr, next_paddr;
+
+	paddr = virt_to_phys(sg->address);
+	next_paddr = paddr + sg->length;
+	paddr &= PAGE_MASK;
+
+	for (++sg; --nents >= 0; ++sg) {
+		unsigned long addr = virt_to_phys(sg->address);
+
+		if (next_paddr != addr) {
+			pte = sg_fill_span(pte, paddr, next_paddr);
+			paddr = addr & PAGE_MASK;
+		}
+		next_paddr = addr + sg->length;
+	}
+
+	return sg_fill_span(pte, paddr, next_paddr);
+}
+
+int
+pci_map_sg(struct pci_dev *pdev, struct scatterlist *sg, int nents)
+{
+	struct pci_controler *hose;
+	struct iommu_arena *arena;
+	dma_addr_t max_dma, dma_addr;
+	long npages, dma_ofs, i;
+	unsigned long *end_pte;
+	int dma_nents;
+
+	if (! alpha_mv.mv_pci_tbi) {
+		for (i = 0; i < nents; ++i) {
+			sg[i].dma_address = virt_to_bus(sg[i].address);
+			sg[i].dma_length = sg[i].length;
+		}
+		return nents;
+	}
+
+#ifdef NEW_PCI_DMA_MAP
+	/* Fast path single entry scatterlists.  */
+	if (nents == 1) {
+		sg->dma_length = sg->length;
+		sg->dma_address = pci_map_single(pdev, sg->address, sg->length);
+		return sg->dma_address != 0;
+	}
+
+	/* First, find out how many iommu page table entries we'll
+	   consume.  Also set up zero-biased offsets into the dma
+	   region we'll be allocating.  */
+
+	npages = sg_prepare(sg, nents, &dma_nents);
+
+	/* Second, allocate the iommu ptes.  */
+
+	hose = pdev ? pdev->sysdata : pci_isa_hose;
+	max_dma = pdev ? pdev->dma_mask : 0x00ffffff;
+
+	arena = hose->sg_pci;
+	if (!arena || arena->dma_base + arena->size > max_dma)
+		arena = hose->sg_isa;
+
+	dma_ofs = iommu_arena_alloc(arena, npages);
+	if (dma_ofs < 0)
+		return 0;
+	dma_addr = arena->dma_base + dma_ofs * PAGE_SIZE;
+
+	/* Third, normalize the sg dma addresses.  */
+
+	for (i = 0; i < dma_nents; ++i)
+		sg[i].dma_address += dma_addr;
+
+	/* Fourth, fill in the iommu ptes.  */
+
+	end_pte = sg_fill(arena->ptes + dma_ofs, sg, nents);
+	if (end_pte - (arena->ptes + dma_ofs) != npages)
+		BUG();
+
+	/* ??? This shouldn't have been needed, since the entries
+	   we've just modified were not in the iommu tlb.  */
+	alpha_mv.mv_pci_tbi(hose, dma_addr, dma_addr + npages*PAGE_SIZE - 1);
+
+	DBGA(("dma_map_sg: ne %d np %ld -> [%x,%d]\n",
+	      nents, npages, dma_addr, dma_nents));
+
+	return dma_nents;
+#else
+	BUG();
+#endif
+}
+
+/* Unmap a set of streaming mode DMA translations.  Again, cpu read
+   rules concerning calls here are the same as for pci_unmap_single()
+   above.  */
+
+void
+pci_unmap_sg(struct pci_dev *pdev, struct scatterlist *sg, int nents)
+{
+#ifdef NEW_PCI_DMA_MAP
+	long size, i;
+
+	if (! alpha_mv.mv_pci_tbi)
+		return;
+
+	for (i = size = 0; i < nents; ++i)
+		size += (sg[i].dma_length + PAGE_SIZE - 1) & PAGE_MASK;
+
+	DBGA(("dma_unmap_sg: [%x,%d] np %ld\n",
+	      (dma_addr_t) (sg[0].dma_address & PAGE_MASK),
+	      nents, calc_npages(size)));
+
+	pci_unmap_single(pdev, sg[0].dma_address, size);
+#endif
+}
--- linux/arch/alpha/kernel/proto.h.jj	Mon Dec 20 09:10:16 1999
+++ linux/arch/alpha/kernel/proto.h	Thu Feb  3 08:12:24 2000
@@ -9,55 +9,65 @@
 struct pt_regs;
 struct task_struct;
 struct pci_dev;
+struct pci_controler;
 
 /* core_apecs.c */
 extern struct pci_ops apecs_pci_ops;
 extern void apecs_init_arch(void);
 extern void apecs_pci_clr_err(void);
 extern void apecs_machine_check(u64, u64, struct pt_regs *);
+extern void apecs_pci_tbi(struct pci_controler *, dma_addr_t, dma_addr_t);
 
 /* core_cia.c */
 extern struct pci_ops cia_pci_ops;
 extern void cia_init_arch(void);
 extern void cia_machine_check(u64, u64, struct pt_regs *);
+extern void cia_pci_tbi(struct pci_controler *, dma_addr_t, dma_addr_t);
 
 /* core_irongate.c */
 extern struct pci_ops irongate_pci_ops;
 extern int irongate_pci_clr_err(void);
 extern void irongate_init_arch(void);
 extern void irongate_machine_check(u64, u64, struct pt_regs *);
+#define irongate_pci_tbi ((void *)0)
 
 /* core_lca.c */
 extern struct pci_ops lca_pci_ops;
 extern void lca_init_arch(void);
 extern void lca_machine_check(u64, u64, struct pt_regs *);
+extern void lca_pci_tbi(struct pci_controler *, dma_addr_t, dma_addr_t);
 
 /* core_mcpcia.c */
 extern struct pci_ops mcpcia_pci_ops;
 extern void mcpcia_init_arch(void);
 extern void mcpcia_init_hoses(void);
 extern void mcpcia_machine_check(u64, u64, struct pt_regs *);
+extern void mcpcia_pci_tbi(struct pci_controler *, dma_addr_t, dma_addr_t);
 
 /* core_polaris.c */
 extern struct pci_ops polaris_pci_ops;
 extern void polaris_init_arch(void);
 extern void polaris_machine_check(u64, u64, struct pt_regs *);
+extern void polaris_pci_tbi(struct pci_controler *, dma_addr_t, dma_addr_t);
 
 /* core_pyxis.c */
 extern struct pci_ops pyxis_pci_ops;
 extern void pyxis_init_arch(void);
 extern void pyxis_machine_check(u64, u64, struct pt_regs *);
+extern void pyxis_pci_tbi(struct pci_controler *, dma_addr_t, dma_addr_t);
 
 /* core_t2.c */
 extern struct pci_ops t2_pci_ops;
 extern void t2_init_arch(void);
 extern void t2_machine_check(u64, u64, struct pt_regs *);
+#define t2_pci_tbi ((void *)0)
 
 /* core_tsunami.c */
 extern struct pci_ops tsunami_pci_ops;
 extern void tsunami_init_arch(void);
 extern void tsunami_kill_arch(int);
 extern void tsunami_machine_check(u64, u64, struct pt_regs *);
+extern void tsunami_pci_tbi(struct pci_controler *, dma_addr_t, dma_addr_t);
 
 /* setup.c */
 extern unsigned long srm_hae;
--- linux/include/asm-alpha/io.h.jj	Mon Dec 20 09:28:20 1999
+++ linux/include/asm-alpha/io.h	Thu Feb  3 08:13:49 2000
@@ -430,7 +430,7 @@ out:
 #endif
 
 #define RTC_ALWAYS_BCD	0
 
-/* Nothing to do */
+/* DMA Cache coherency.  Nothing to do.  */
 #define dma_cache_inv(_start,_size)		do { } while (0)
 #define dma_cache_wback(_start,_size)		do { } while (0)
--- linux/include/asm-alpha/machvec.h.jj	Mon Dec 20 09:28:22 1999
+++ linux/include/asm-alpha/machvec.h	Thu Feb  3 08:12:24 2000
@@ -21,6 +21,7 @@ struct vm_area_struct;
 struct linux_hose_info;
 struct pci_dev;
 struct pci_ops;
+struct pci_controler;
 
 struct alpha_machine_vector
 {
@@ -41,6 +42,8 @@ struct alpha_machine_vector
 
 	unsigned long (*mv_virt_to_bus)(void *);
 	void * (*mv_bus_to_virt)(unsigned long);
+	void (*mv_pci_tbi)(struct pci_controler *hose,
+			   dma_addr_t start, dma_addr_t end);
 
 	unsigned int (*mv_inb)(unsigned long);
 	unsigned int (*mv_inw)(unsigned long);
--- linux/include/asm-alpha/pci.h.jj	Fri Jan 14 09:44:35 2000
+++ linux/include/asm-alpha/pci.h	Thu Feb  3 08:41:07 2000
@@ -1,36 +1,136 @@
 #ifndef __ALPHA_PCI_H
 #define __ALPHA_PCI_H
 
+#include <linux/spinlock.h>
+#include <asm/scatterlist.h>
 #include <asm/machvec.h>
 
-/*
- * The following structure is used to manage multiple PCI busses.
- */
+/* Override the logic in pci_scan_bus for skipping already-configured
+   bus numbers.  */
+
+#define pcibios_assign_all_busses()	1
+
+#define PCIBIOS_MIN_IO		alpha_mv.min_io_address
+#define PCIBIOS_MIN_MEM		alpha_mv.min_mem_address
 
 struct pci_bus;
+struct pci_dev;
 struct resource;
 
+/* An IOMMU allocation arena.  There are typically two of these
+   regions per bus.  */
+/* ??? The 8400 has a 32-byte pte entry, and the entire table apparently
+   lives directly on the host bridge (no tlb?).  We don't support this
+   machine, but if we ever did, we'd need to parameterize all this quite
+   a bit further.  Probably with per-bus operation tables.  */
+
+struct iommu_arena
+{
+	spinlock_t lock;
+	unsigned long *ptes;
+	dma_addr_t dma_base;
+	unsigned int size;
+	unsigned int alloc_hint;
+};
+
+/* A controler.  Used to manage multiple PCI busses.  */
+
 struct pci_controler {
-	/* Mandated.  */
 	struct pci_controler *next;
 	struct pci_bus *bus;
 	struct resource *io_space;
 	struct resource *mem_space;
 
-	/* Alpha specific.  */
 	unsigned long config_space;
 	unsigned int index;
 	unsigned int first_busno;
 	unsigned int last_busno;
+
+#ifdef NEW_PCI_DMA_MAP
+	dma_addr_t direct_map_base;
+	unsigned int direct_map_size;
+	struct iommu_arena *sg_pci;
+	struct iommu_arena *sg_isa;
+#endif
 };
 
-/* Override the logic in pci_scan_bus for skipping already-configured
-   bus numbers.  */
-
-#define pcibios_assign_all_busses()	1
-
-#define PCIBIOS_MIN_IO		alpha_mv.min_io_address
-#define PCIBIOS_MIN_MEM		alpha_mv.min_mem_address
+/* IOMMU controls.  */
+
+/* Allocate and map kernel buffer using consistent mode DMA for PCI
+   device.  Returns non-NULL cpu-view pointer to the buffer if
+   successful and sets *DMA_ADDRP to the pci side dma address as well,
+   else DMA_ADDRP is undefined.  */
+
+extern void *pci_alloc_consistent(struct pci_dev *, size_t, dma_addr_t *);
+
+/* Free and unmap a consistent DMA buffer.  CPU_ADDR and DMA_ADDR must
+   be values that were returned from pci_alloc_consistent.  SIZE must
+   be the same as what was passed into pci_alloc_consistent.
+   References to the memory and mappings associated with CPU_ADDR or
+   DMA_ADDR past this call are illegal.  */
+
+extern void pci_free_consistent(struct pci_dev *, size_t, void *, dma_addr_t);
+
+/* Map a single buffer of the indicated size for PCI DMA in streaming
+   mode.  The 32-bit PCI bus mastering address to use is returned.
+   Once the device is given the dma address, the device owns this memory
+   until either pci_unmap_single or pci_sync_single is performed.  */
+
+extern dma_addr_t pci_map_single(struct pci_dev *, void *, size_t);
+
+/* Unmap a single streaming mode DMA translation.  The DMA_ADDR and
+   SIZE must match what was provided for in a previous pci_map_single
+   call.  All other usages are undefined.  After this call, reads by
+   the cpu to the buffer are guaranteed to see whatever the device
+   wrote there.  */
+
+extern void pci_unmap_single(struct pci_dev *, dma_addr_t, size_t);
+
+/* Map a set of buffers described by scatterlist in streaming mode for
+   PCI DMA.  This is the scatter-gather version of the above
+   pci_map_single interface.  Here the scatter gather list elements
+   are each tagged with the appropriate PCI dma address and length.
+   They are obtained via sg_dma_{address,length}(SG).
+
+   NOTE: An implementation may be able to use a smaller number of DMA
+   address/length pairs than there are SG table elements.  (for
+   example via virtual mapping capabilities)  The routine returns the
+   number of addr/length pairs actually used, at most nents.
+
+   Device ownership issues as mentioned above for pci_map_single are
+   the same here.  */
+
+extern int pci_map_sg(struct pci_dev *, struct scatterlist *, int);
+
+/* Unmap a set of streaming mode DMA translations.  Again, cpu read
+   rules concerning calls here are the same as for pci_unmap_single()
+   above.  */
+
+extern void pci_unmap_sg(struct pci_dev *, struct scatterlist *, int);
+
+/* Make physical memory consistent for a single streaming mode DMA
+   translation after a transfer.
+
+   If you perform a pci_map_single() but wish to interrogate the
+   buffer using the cpu, yet do not wish to teardown the PCI dma
+   mapping, you must call this function before doing so.  At the next
+   point you give the PCI dma address back to the card, the device
+   again owns the buffer.  */
+
+extern inline void
+pci_dma_sync_single(struct pci_dev *dev, dma_addr_t dma_addr, size_t size)
+{
+	/* Nothing to do.  */
+}
+
+/* Make physical memory consistent for a set of streaming mode DMA
+   translations after a transfer.  The same as pci_dma_sync_single but
+   for a scatter-gather list, same rules and usage.  */
+
+extern inline void
+pci_dma_sync_sg(struct pci_dev *dev, struct scatterlist *sg, int nents)
+{
+	/* Nothing to do.  */
+}
 
 #endif /* __ALPHA_PCI_H */
-
--- linux/include/asm-alpha/scatterlist.h.jj	Wed Oct 13 08:05:45 1999
+++ linux/include/asm-alpha/scatterlist.h	Thu Feb  3 08:12:24 2000
@@ -1,12 +1,19 @@
 #ifndef _ALPHA_SCATTERLIST_H
 #define _ALPHA_SCATTERLIST_H
 
+#include <asm/types.h>
+
 struct scatterlist {
-	char * address;		/* Location data is to be transferred to */
-	char * alt_address;	/* Location of actual if address is a
-				 * dma indirect buffer.  NULL otherwise */
-	unsigned int length;
+	char *address;		/* Source/target vaddr.  */
+	char *alt_address;	/* Location of actual if address is a
+				   dma indirect buffer, else NULL.  */
+	dma_addr_t dma_address;
+	unsigned int length;
+	unsigned int dma_length;
 };
+
+#define sg_dma_address(sg)	((sg)->dma_address)
+#define sg_dma_len(sg)		((sg)->dma_length)
 
 #define ISA_DMA_THRESHOLD (~0UL)
--- linux/include/asm-alpha/types.h.jj	Wed Oct 13 08:05:46 1999
+++ linux/include/asm-alpha/types.h	Thu Feb  3 08:12:24 2000
@@ -56,22 +56,14 @@ typedef unsigned short u16;
 typedef signed int s32;
 typedef unsigned int u32;
 
-/*
- * There are 32-bit compilers for the alpha out there..
- */
-#if ((~0UL) == 0xffffffff)
-
-typedef signed long long s64;
-typedef unsigned long long u64;
-#define BITS_PER_LONG 32
-
-#else
-
 typedef signed long s64;
 typedef unsigned long u64;
 #define BITS_PER_LONG 64
 
-#endif
+/* PCI dma addresses are 32-bits wide.  Ignore PCI64 for now, since
+   we'll typically be sending it all through iommu tables anyway.  */
+
+typedef u32 dma_addr_t;
 
 #endif /* __KERNEL__ */
 #endif /* _ALPHA_TYPES_H */

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/
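As a rough illustration of the allocation strategy used by
iommu_arena_alloc() in the patch above, here is a self-contained,
user-space sketch of the same first-fit scan over a page-table array.
It is simplified on purpose: no spinlock, no IRQ flags, and a tiny
fixed-size arena; the names and sizes are invented for the demo.

	/* Stand-alone demo of the first-fit PTE scan: find N consecutive
	   zero entries starting at a hint, retry from the beginning on
	   failure, then mark the run in use.  Compile: cc -o arena arena.c */
	#include <stdio.h>

	#define ARENA_ENTRIES 16

	static unsigned long ptes[ARENA_ENTRIES];
	static long alloc_hint;

	static long arena_alloc(long n)
	{
		long p = alloc_hint, i = 0;

		/* Search forward from the hint for n consecutive free ptes. */
		while (i < n && p < ARENA_ENTRIES)
			i = (ptes[p++] == 0 ? i + 1 : 0);

		if (i < n) {
			/* The hint may have skipped the free space; retry
			   once from offset 0, as the kernel code does. */
			p = 0; i = 0;
			while (i < n && p < ARENA_ENTRIES)
				i = (ptes[p++] == 0 ? i + 1 : 0);
			if (i < n)
				return -1;	/* arena exhausted */
		}

		/* Mark the run allocated.  The real code writes ~1UL so
		   that bit 0 (the valid bit) stays clear until genuine
		   ptes are filled in by the caller. */
		for (p -= n, i = 0; i < n; ++i)
			ptes[p + i] = ~1UL;

		alloc_hint = p + n;
		return p;		/* offset of first pte in the run */
	}

	int main(void)
	{
		printf("first run at %ld\n", arena_alloc(4));	/* 0 */
		printf("second run at %ld\n", arena_alloc(4));	/* 4 */
		printf("too big: %ld\n", arena_alloc(12));	/* -1 */
		return 0;
	}

The retry-from-zero step matters because the hint only remembers the end
of the last allocation (or the start of the last free), so a failure past
the hint does not prove the arena is full.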