How can I repeatedly call AddImageUrl(url) in ASP.NET to assemble a PDF document efficiently?

2026-03-30 12:57 · 1 view · 0 comments · SEO Basics

This article is about 767 words long; estimated reading time 4 minutes.


I am using ABCpdf, and I would like to know whether AddImageUrl() can be called repeatedly to assemble a single PDF document from multiple URLs. Something like this:

```csharp
int pageCount = 0;
int theId = theDoc.AddImageUrl("stackoverflow.com/search?q=abcpdf+footer+page+x+out+of+", true, 0, true);

// assemble document
while (theDoc.Chainable(theId))
{
    theDoc.Page = theDoc.AddPage();
    theId = theDoc.AddImageToChain(theId);
}
pageCount = theDoc.PageCount;
Console.WriteLine("1 document page count:" + pageCount);

// flatten document
for (int i = 1; i <= pageCount; i++)
{
    theDoc.PageNumber = i;
    theDoc.Flatten();
}

// now try again
theId = theDoc.AddImageUrl("stackoverflow.com/questions/1980890/pdf-report-generation", true, 0, true);

// assemble document
while (theDoc.Chainable(theId))
{
    theDoc.Page = theDoc.AddPage();
    theId = theDoc.AddImageToChain(theId);
}
Console.WriteLine("2 document page count:" + theDoc.PageCount);

// flatten document
for (int i = pageCount + 1; i <= theDoc.PageCount; i++)
{
    theDoc.PageNumber = i;
    theDoc.Flatten();
}
pageCount = theDoc.PageCount;
```

Edit:
Here is code that seems to work, based on 'hunter's' solution:

```csharp
static void Main(string[] args)
{
    Test2();
}

static void Test2()
{
    Doc theDoc = new Doc();
    theDoc.HtmlOptions.ContentCount = 10;  // minimum number of items a page of HTML must contain, otherwise the page is assumed invalid
    theDoc.HtmlOptions.RetryCount = 10;    // try to obtain the HTML page up to 10 times
    theDoc.HtmlOptions.Timeout = 180000;   // the page must be obtained within 180 seconds

    // set up document
    theDoc.Rect.Inset(0, 10);
    theDoc.Rect.Position(5, 15);
    theDoc.Rect.Width = 602;
    theDoc.Rect.Height = 767;
    theDoc.HtmlOptions.PageCacheEnabled = false;

    IList<string> urls = new List<string>();
    urls.Add("stackoverflow.com/search?q=abcpdf+footer+page+x+out+of+");
    urls.Add("stackoverflow.com/questions/1980890/pdf-report-generation");
    urls.Add("yahoo.com");
    urls.Add("stackoverflow.com/questions/4338364/recursively-call-addimageurlurl-to-assemble-pdf-document");

    foreach (string url in urls)
        AddImage(ref theDoc, url);

    // flatten document
    for (int i = 1; i <= theDoc.PageCount; i++)
    {
        theDoc.PageNumber = i;
        theDoc.Flatten();
    }

    theDoc.Save("batchReport.pdf");
    theDoc.Clear();
    Console.Read();
}

static void AddImage(ref Doc theDoc, string url)
{
    int theId = theDoc.AddImageUrl(url, true, 0, true);
    while (theDoc.Chainable(theId))
    {
        theDoc.Page = theDoc.AddPage();
        theId = theDoc.AddImageToChain(theId); // is this right?
    }
    Console.WriteLine(string.Format("document page count: {0}", theDoc.PageCount));
}
```

Edit 2: Unfortunately, calling AddImageUrl multiple times on the same document while generating the PDF does not appear to work after all...


I finally found a reliable solution.
Rather than executing AddImageUrl() repeatedly against the same underlying document, each URL should be rendered into its own Doc. Build up a collection of these documents, and finally combine them into a single document with the Append() method.
Here is the code:

```csharp
static void Main(string[] args)
{
    Test2();
}

static void Test2()
{
    Doc theDoc = new Doc();

    var urls = new Dictionary<int, string>();
    urls.Add(1, "www.asp101.com/samples/server_execute_aspx.asp");
    urls.Add(2, "stackoverflow.com/questions/4338364/repeatedly-call-addimageurlurl-to-assemble-pdf-document");
    urls.Add(3, "www.google.ca/");
    urls.Add(4, "ca.yahoo.com/?p=us");

    var theDocs = new List<Doc>();
    foreach (int key in urls.Keys)
        theDocs.Add(GetReport(urls[key]));

    foreach (var doc in theDocs)
    {
        if (theDocs.IndexOf(doc) == 0)
            theDoc = doc;
        else
            theDoc.Append(doc);
    }

    theDoc.Save("batchReport.pdf");
    theDoc.Clear();
    Console.Read();
}

static Doc GetReport(string url)
{
    Doc theDoc = new Doc();
    theDoc.HtmlOptions.ContentCount = 10;  // minimum number of items a page of HTML must contain, otherwise the page is assumed invalid
    theDoc.HtmlOptions.RetryCount = 10;    // try to obtain the HTML page up to 10 times
    theDoc.HtmlOptions.Timeout = 180000;   // the page must be obtained within 180 seconds

    // set up document
    theDoc.Rect.Inset(0, 10);
    theDoc.Rect.Position(5, 15);
    theDoc.Rect.Width = 602;
    theDoc.Rect.Height = 767;
    theDoc.HtmlOptions.PageCacheEnabled = false;

    int theId = theDoc.AddImageUrl(url, true, 0, true);
    while (theDoc.Chainable(theId))
    {
        theDoc.Page = theDoc.AddPage();
        theId = theDoc.AddImageToChain(theId);
    }

    // flatten document
    for (int i = 1; i <= theDoc.PageCount; i++)
    {
        theDoc.PageNumber = i;
        theDoc.Flatten();
    }
    return theDoc;
}
```
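One detail worth noting about this pattern: each call to GetReport() allocates a Doc that holds unmanaged resources, and Append() copies the source document's pages into the target, so the intermediate documents can be released once appended. The sketch below is a hedged variant of the same Append pattern with explicit cleanup; it assumes the ABCpdf API used above (ABCpdf's Doc implements IDisposable), and BuildBatchReport is a hypothetical helper name, not part of the library.

```csharp
// Sketch: render each URL into its own Doc via GetReport(), append the
// results into one document, and dispose each source Doc after appending.
// Assumes the ABCpdf Doc/Append API and the GetReport() method shown above.
static Doc BuildBatchReport(IEnumerable<string> urls)
{
    Doc combined = null;
    foreach (string url in urls)
    {
        Doc part = GetReport(url);    // one URL rendered into its own Doc
        if (combined == null)
        {
            combined = part;          // first document becomes the target
        }
        else
        {
            combined.Append(part);    // Append copies the pages into 'combined',
            part.Dispose();           // so the source Doc can be released here
        }
    }
    return combined;                  // caller is responsible for Save() and Dispose()
}
```

The caller would then save and dispose the combined document, e.g. inside a `using` block, which matters in a long-running ASP.NET process where leaked Doc instances accumulate between requests.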
